We start our discussion of sensor accuracy by defining some important terms. Accuracy is the difference between the output value of the sensor and the reference value of the output as measured by a perfect calibration standard. The accuracy spec consists of a systematic error, due to statistical bias of the sensor, and a random error, the result of statistical variability. Systematic errors are predictable and repeatable. They may result from the nonlinearity of the sensor, from zero or span error, or from using it at temperatures above or below the nominal calibration temperature. Systematic errors may be reduced by compensation methods such as those discussed in our video on temperature compensation. Random errors are not predictable on an individual basis; we use statistical means to characterize them. Random errors in the sensor may result from sensor hysteresis or from the calibration process for the sensor.

The precision of a measurement system is the degree to which repeated measurements by a consistent calibration process show the same results. Precision may be represented by a bell-shaped normal curve, as shown in the graph on the left. You should all be familiar with the normal curve by now. It's the curve that college professors like me use to give you your grades in school. A measurement system can be accurate but not precise, precise but not accurate, neither accurate nor precise, or both accurate and precise. This is shown by the graph on the right. If your sensor contains a systematic error, then increasing the sample size of measurements during calibration increases its precision but not its accuracy. In that case, your sensor would read a consistent string of inaccurate measurements. A sensor is well calibrated if the calibration system is precise and if the systematic errors of the sensor readings are small. Resolution is the smallest measurement a sensor can reliably indicate.
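The distinction above can be made concrete with a short simulation. This sketch uses hypothetical numbers (a true input of 10.0 units, a +0.5 systematic bias, and 0.2 units of random noise, none of which come from the slides) to show that averaging more samples shrinks the random error but leaves the bias untouched:

```python
import random
import statistics

def simulate_sensor(true_value, bias, noise_sd, n_samples, seed=0):
    """Simulate repeated readings from a sensor with a fixed systematic
    bias and Gaussian random error (hypothetical illustration)."""
    rng = random.Random(seed)
    readings = [true_value + bias + rng.gauss(0.0, noise_sd)
                for _ in range(n_samples)]
    return statistics.mean(readings), statistics.stdev(readings)

# True input 10.0 units, +0.5 systematic bias, 0.2 random noise (made up).
mean_small, sd_small = simulate_sensor(10.0, 0.5, 0.2, 10)
mean_large, sd_large = simulate_sensor(10.0, 0.5, 0.2, 100_000)

# Averaging many readings tightens the estimate of the mean (precision),
# but the mean still sits near 10.5, not 10.0: the bias does not average out.
print(mean_small, mean_large)
```

This is exactly the "consistent string of inaccurate measurements" described above: the large-sample mean is very repeatable, yet still wrong by the systematic bias.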
A sensing system with high measurement resolution distinguishes an input change at a low signal level from sensor noise. For the graph on this slide, a measurement system can be: one, accurate with high resolution; two, inaccurate with high resolution; three, inaccurate with low resolution; or four, accurate with low resolution. You want to buy the sensor whose output is type one.

The resolution of the analog-to-digital converter in a sensor signal chain is the most important factor in determining the resolution of the sensor system. An ADC can represent only a finite number of output codes; its step size equals the input full scale of the converter divided by two raised to the number of bits in the ADC. The more bits an ADC has, the higher the resolution of the system. Let's go through an example of how to calculate the resolution of an ADC in a sensor signal system. Suppose you choose a 16-bit ADC for your system. It has two to the sixteenth, or 65,536, possible output codes, each with a finite amount of error. Suppose the full-scale input is five volts peak to peak. Then the ADC has a least significant bit size of 0.076 millivolts and a quantization error of plus or minus 0.038 millivolts.

In our videos on amplifiers, we discussed offset and gain errors, and an ADC suffers from the same types of errors. Recall that offset error is the analog value by which the transfer function fails to pass through zero. Gain error is the difference in full-scale value between the ideal and actual transfer functions when the offset error is zero. ADCs also have linearity errors, the deviation from the straight line drawn between zero scale and full scale. These inaccuracies reduce the Signal-to-Noise Ratio, SNR, well below the ideal SNR. As one example, we calculate 98 dB for the ideal SNR of a 16-bit ADC, yet the datasheet SNR for one commercial ADC is only 78 dB. Linearity errors in an ADC consist of Differential Nonlinearity, DNL, and Integral Nonlinearity, INL.
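The arithmetic in the 16-bit example can be sketched in a few lines. The ideal-SNR figure uses the standard 6.02N + 1.76 dB formula for a full-scale sine input:

```python
def adc_resolution(full_scale_v, n_bits):
    """LSB size and worst-case quantization error for an ideal ADC."""
    codes = 2 ** n_bits            # number of output codes
    lsb = full_scale_v / codes     # volts per code
    return lsb, lsb / 2.0          # quantization error is +/- half an LSB

def ideal_snr_db(n_bits):
    """Ideal SNR of an N-bit ADC for a full-scale sine: 6.02N + 1.76 dB."""
    return 6.02 * n_bits + 1.76

lsb, q_err = adc_resolution(5.0, 16)
print(f"LSB = {lsb * 1e3:.3f} mV, error = +/- {q_err * 1e3:.3f} mV")
print(f"Ideal 16-bit SNR = {ideal_snr_db(16):.1f} dB")
# Matches the example: ~0.076 mV LSB, ~0.038 mV error, ~98 dB ideal SNR.
```

The gap between the ~98 dB ideal figure and a real part's datasheet SNR is exactly the budget consumed by the offset, gain, and linearity errors discussed above.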
DNL is the deviation of the step between two adjacent codes from the ideal voltage value of one Least Significant Bit, LSB. INL is defined as the curvature deviation of the transfer function from a straight line between zero and full scale. INL determines the Spurious-Free Dynamic Range, SFDR, performance of the ADC. SFDR is defined as the ratio of the strength of the fundamental signal to that of the strongest spurious signal in the output, as shown in the graph on the left. The shape of the curvature deviation determines the dominant harmonic performance of the ADC. It is typically frequency-dependent, as shown in the spec on the right.

So far, we've looked at accuracy errors in the sensor, but another large source of error is found in the calibration process and equipment that you use for the sensor. You can quantify this error with a method called an uncertainty analysis, part of a larger body of work called measurement uncertainty. Measurement uncertainty is a statistical term expressing the randomness of a series of measurements. It is typically expressed in terms of the standard deviation of the measurement values. The lower the standard deviation, the less measurement uncertainty you have in your reading. The manufacturer issuing a calibration certificate must include a statement of the measurement uncertainty inherent in its calibration process. To calculate your measurement uncertainty, you perform an uncertainty analysis on your calibration equipment and process. Sample data for an uncertainty analysis is shown in this slide. You start by identifying possible sources of uncertainty for each step in your measurement. You determine the magnitude of each contributing factor either through data reduction or mathematical analysis. If you purchase calibration equipment for your lab, obtain the spec sheets for each piece of equipment you use in the measurement. These spec sheets show their accuracy and resolution specs, which become subsets of your specs for the calibration process.
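Since measurement uncertainty is expressed as a standard deviation, computing it from repeated readings is straightforward. A minimal sketch, using made-up readings of a nominal 5 V reference (the values are illustrative, not from the slide):

```python
import statistics

# Hypothetical repeated readings of the same 5 V reference during calibration.
readings = [4.998, 5.002, 5.001, 4.999, 5.000, 5.003, 4.997, 5.000]

mean = statistics.mean(readings)
u = statistics.stdev(readings)  # sample standard deviation = standard uncertainty

print(f"mean = {mean:.4f} V, standard uncertainty = {u:.4f} V")
```

A tighter spread of readings gives a smaller standard deviation, and therefore less measurement uncertainty in the reading, as stated above.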
Document the data in an uncertainty budget using a spreadsheet, as shown in this slide. You choose the appropriate probability distribution for each uncertainty entry. The divisor in the slide depends on the distribution assigned to that entry. For example, if you have a uniform, in other words rectangular, probability distribution curve for your measurement, your divisor is the square root of three, which equals 1.73. You then combine the divided uncertainty percentages using the RMS method per the equation in the lower right corner. Recall that we also used the RMS method to calculate the accuracy of the sensor hardware, which included errors due to linearity, zero and span offset, as well as thermal zero and span offset. Per the guidelines of the ISO 17025 standard, you then double the combined uncertainty percentage to achieve 95 percent confidence in your stated uncertainty. Let's recap. This ends our module on sensor characterization. In the next module, we will study how manufacturers build sensors and guarantee their reliability in the field.
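As a closing sketch, the uncertainty-budget procedure described above (divide each entry by its distribution's divisor, combine by RMS, then double for roughly 95 percent confidence per ISO 17025 practice) can be written out directly. The budget entries and their percentages here are hypothetical, not taken from the slide:

```python
import math

# Hypothetical uncertainty budget: (source, raw uncertainty %, divisor).
# The divisor converts each entry to a standard uncertainty:
# sqrt(3) ~ 1.73 for a rectangular distribution, 2 for a spec quoted at 95 %.
budget = [
    ("reference standard", 0.050, 2.0),
    ("DMM resolution",     0.020, math.sqrt(3)),
    ("temperature drift",  0.030, math.sqrt(3)),
    ("repeatability",      0.015, 1.0),  # already a standard deviation
]

# Combine the standard uncertainties with the root-sum-square (RMS) method.
combined = math.sqrt(sum((u / d) ** 2 for _, u, d in budget))

# Expand by a coverage factor k = 2 for ~95 % confidence.
expanded = 2.0 * combined

print(f"combined = {combined:.4f} %, expanded (k=2) = {expanded:.4f} %")
```

Note that the RMS combination rewards reducing the largest contributor first: shrinking a dominant entry moves the combined figure far more than shrinking a small one.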