Smart and conventional instruments require entirely different calibration procedures. In this session we will discuss the differences between calibrating a HART instrument and a conventional instrument.
CALIBRATING A CONVENTIONAL INSTRUMENT
For a conventional 4-20 mA instrument, a multiple point test that stimulates the input and measures the output is sufficient to characterize the overall accuracy of the transmitter. The normal calibration adjustment involves setting only the zero and span values, since there is effectively only one adjustable operation between the input and the output, as illustrated below.
This procedure is often referred to as a zero-and-span calibration. If the relationship between the input and output range of the instrument is not linear, you must know the transfer function before you can calculate the expected output for each input value. Without the expected output values, you cannot calculate the performance errors.
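The error calculation for a zero-and-span check can be sketched as follows. This is a minimal illustration assuming a linear transfer function; the function names and the example numbers are hypothetical, not from any particular calibrator:

```python
def expected_output_ma(pv, lrv, urv):
    """Expected 4-20 mA output for a linear transmitter.

    pv  - applied input in engineering units
    lrv - lower range value (input that should produce 4 mA)
    urv - upper range value (input that should produce 20 mA)
    """
    return 4.0 + 16.0 * (pv - lrv) / (urv - lrv)

def error_percent_of_span(measured_ma, expected_ma):
    """Calibration error expressed as a percentage of the 16 mA span."""
    return (measured_ma - expected_ma) / 16.0 * 100.0

# Example: 0-100 inH2O transmitter, 50% point applied, 12.08 mA measured
expected = expected_output_ma(50.0, 0.0, 100.0)   # 12.0 mA
error = error_percent_of_span(12.08, expected)    # 0.5% of span
```

For a non-linear instrument, the same error formula applies, but `expected_output_ma` would have to incorporate the instrument's actual transfer function.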
CALIBRATING A HART INSTRUMENT
For a HART instrument, a multiple point test between input and output does not provide an accurate representation of the transmitter's operation. Like a conventional transmitter, the measurement process begins with a technology that converts a physical quantity into an electrical signal. However, the similarity ends there. Instead of a purely mechanical or electrical path between the input and the resulting 4-20 mA output signal, a HART transmitter has a microprocessor that manipulates the input data. As shown in the figure, there are normally three calculation sections involved, and each of these sections can be tested and adjusted individually. In the first section, the instrument's microprocessor measures some electrical property that is affected by the process variable of interest.
The measured value may be millivolts, capacitance, reluctance, inductance, frequency, or some other property. Before the microprocessor can use it, however, it must be converted into a digital count by an analog-to-digital (A/D) converter. In this first block, the microprocessor relies on an equation or characterization table to relate the raw count value of the electrical measurement to the actual process variable (PV) of interest, such as temperature, pressure, or flow.
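The first-block lookup can be sketched as a linear interpolation in a characterization table; the table values here are invented for illustration, and a real instrument's characterization is typically more elaborate:

```python
from bisect import bisect_right

def counts_to_pv(raw_counts, char_table):
    """Convert raw A/D counts to a process variable by linear
    interpolation in a characterization table of (counts, pv)
    pairs, assumed sorted by counts."""
    counts = [c for c, _ in char_table]
    i = bisect_right(counts, raw_counts) - 1
    i = max(0, min(i, len(char_table) - 2))   # clamp to table edges
    (c0, p0), (c1, p1) = char_table[i], char_table[i + 1]
    return p0 + (p1 - p0) * (raw_counts - c0) / (c1 - c0)

# Hypothetical table relating A/D counts to pressure in inH2O
table = [(1000, 0.0), (2000, 25.0), (3000, 50.0), (4000, 100.0)]
pv = counts_to_pv(2500, table)   # 37.5 inH2O
```

A sensor trim, in this sketch, would amount to adjusting the table (or the equation's coefficients) so that the computed PV matches a known applied input.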
The primary form of this characterization is usually set by the manufacturer, but most HART instruments include commands for making field adjustments; this is often referred to as a sensor trim. The result of the first block is a digital representation of the process variable; when you read the process variable with a communicator, this is the value you see. The second block is strictly a mathematical conversion of the process variable to its equivalent milliamp representation. The instrument's range values (related to the zero and span values) are used together with the transfer function to calculate this value. Although a linear transfer function is the most common, pressure transmitters often have a square root option.
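The second-block conversion can be sketched as follows. This is a simplified illustration, not a vendor's actual firmware; only the linear and square-root transfer functions are shown:

```python
import math

def pv_to_ma(pv, lrv, urv, transfer="linear"):
    """Second-block calculation: map the process variable to its
    milliamp representation using the range values and the
    configured transfer function."""
    ratio = (pv - lrv) / (urv - lrv)
    if transfer == "sqrt":
        ratio = math.sqrt(max(ratio, 0.0))   # square root extraction
    return 4.0 + 16.0 * ratio

pv_to_ma(50.0, 0.0, 100.0)            # 12.0 mA (linear, 50% of range)
pv_to_ma(25.0, 0.0, 100.0, "sqrt")    # 12.0 mA (sqrt(0.25) = 0.5 of span)
```

Note that the result is still only a digital number at this stage; producing the physical current is the job of the third block.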
Other special instruments may implement common mathematical transformations or user-defined break point tables. The output of the second block is a digital representation of the desired instrument output; when you read the loop current with a communicator, this is the value you see. Many HART instruments support a command that puts the instrument into a fixed-output test mode, which overrides the normal output of the second block and substitutes a specified output value. The third block is the output section, where the calculated output value is converted to a count value that can be loaded into a digital-to-analog (D/A) converter, producing the actual analog electrical signal. Once again, the microprocessor must rely on internal calibration factors to get the output correct. Adjusting these factors is often referred to as a current loop trim or 4-20 mA trim.
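The arithmetic behind a two-point loop trim can be sketched as below. The model (a simple gain-and-offset error in the output stage) and all function names and readings are illustrative assumptions:

```python
def fit_output_error(meas_at_4, meas_at_20):
    """Model the output stage as actual = a * commanded + b, fitted
    from reference-meter readings taken while the instrument is
    commanded to exactly 4 mA and 20 mA."""
    a = (meas_at_20 - meas_at_4) / 16.0
    b = meas_at_4 - 4.0 * a
    return a, b

def trimmed_command(desired_ma, a, b):
    """Command value to load so the loop actually carries desired_ma."""
    return (desired_ma - b) / a

# Hypothetical reference milliammeter readings during the trim
a, b = fit_output_error(3.98, 20.06)
trimmed_command(12.0, a, b)   # about 11.98 mA commanded to get a true 12.0 mA
```

In a real instrument the correction factors are stored in nonvolatile memory and applied by the microprocessor before loading the D/A converter, but the underlying idea is this same two-point fit.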