Measurement Systems in Quality


Measurement system, any of the systems used in the process of associating numbers with physical quantities and phenomena. Although the concept of weights and measures today includes such factors as temperature, luminosity, pressure, and electric current, it once consisted of only four basic measurements: mass (weight), distance or length, area, and volume (liquid or grain measure). The last three are, of course, closely related. Basic to the whole idea of weights and measures are the concepts of uniformity, units, and standards. Uniformity, the essence of any system of weights and measures, requires accurate, reliable standards of mass and length and agreed-on units. A unit is the name of a quantity, such as kilogram or pound. A standard is the physical embodiment of a unit, such as the platinum-iridium cylinder kept by the International Bureau of Weights and Measures at Paris as the standard kilogram.

Two types of measurement systems are distinguished historically: an evolutionary system, such as the British Imperial, which grew more or less haphazardly out of custom, and a planned system, such as the International System of Units (SI; Système international d'unités), in universal use by the world's scientific community and by most nations.

Measurement Methods

  1. Transfer Tools

    Transfer tools (such as spring calipers) have no reading scale. Jaws on these instruments measure the length, width, or depth in question by positive contact. The dimension measurement is then transferred to another measurement scale for direct reading.

  2. Attribute Gages

    Attribute gages are fixed gages which typically are used to make a go/no-go decision. Examples of attribute instruments are master gages, plug gages, contour gages, thread gages, limit length gages, assembly gages, etc. Attribute data indicates only whether a product is good or bad. Attribute gages are quick and easy to use but provide minimal information for production control.

  3. Variable Gages

    Variable measuring instruments provide a physical measured dimension. Examples of variable instruments are rulers, Vernier calipers, micrometers, depth indicators, run out indicators, etc. Variable information provides a measure of the extent that a product is good or bad, relative to specifications. Variable data is often useful for process capability determination and may be monitored via control charts.

  4. Reference/Measuring Surfaces

    A reference surface is the surface of a measuring tool that is fixed. The measuring surface is movable. Both surfaces must be free of grit and damage, seated securely against the part, and properly aligned for an accurate measurement.
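
The link between variable data and process capability noted under item 3 can be sketched in a few lines. This is an illustration only: the function name and the sample readings are invented, and the usual assumptions of a stable, roughly normal process apply.

```python
import statistics

def process_capability(measurements, lsl, usl):
    """Estimate Cp and Cpk for variable data against spec limits.

    Cp  = (USL - LSL) / (6 * sigma)                  -- potential capability
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma)  -- actual capability
    """
    mean = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Ten diameter readings (inches) against a 0.500" +/- 0.005" specification
readings = [0.499, 0.501, 0.500, 0.502, 0.498,
            0.500, 0.501, 0.499, 0.500, 0.500]
cp, cpk = process_capability(readings, lsl=0.495, usl=0.505)
```

Attribute (go/no-go) data from the same ten parts would show only that all ten passed, which is why variable gages support capability studies while attribute gages do not.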

Instrument Selection

The terms measuring tool, instrument, and gage are often used interchangeably. Obviously, the appropriate gage should be used for the required measurement. Listed below are some gage accuracies and applications.

Type of Gage | Accuracy | Application
Adjustable snap gages | Usually accurate within 10% of the tolerance. | Measures diameters on a production basis where an exact measurement is not needed.
Air gages | Accuracy depends upon the gage design. Measurements of less than 0.000050″ are possible. | Used to measure the diameter of a bore or hole. However, other applications are possible.
Automatic sorting gages | Accurate within 0.0001″. | Used to sort parts by dimension.
Combination square | Accurate within one degree. | Used to make angular checks.
Coordinate measuring machines | Accuracy depends upon the part. Axis accuracies are within 35 millionths and T.I.R. within 0.000005″. | Can be used to measure a variety of characteristics, such as contour, taper, radii, roundness, squareness, etc.
Dial bore gages | Accurate within 0.0001″ using great care. | Used to measure bore diameters, tapers, or out-of-roundness.
Dial indicators | Accuracy depends upon the type of indicator. Some measure within 0.0001″. | Measures a variety of features such as flatness, diameter, concentricity, taper, height, etc.
Electronic comparators | Accurate from 0.00001″ to 0.000001″. | Used where the allowable tolerance is 0.0001″ or less.
Fixed snap gages | No set accuracy. | Normally used to determine if diameters are within specification.
Flush pin gages | Accuracy of about 0.002″. | Used for high volume single purpose applications.
Gage blocks | Accuracy depends upon the grade. Normally the accuracy is 0.000008″ or better. | Best adapted for precision machining and as a comparison master.
Height verniers | Mechanical models measure to 0.0001″. Some digital models attain 0.00005″. | Used to check dimensional tolerances on a surface plate.
Internal and external thread gages | No exact reading. Will discriminate to a given specification limit. | Used for measuring inside and outside pitch thread diameters.
Micrometer (inside) | Mechanical accuracy is about 0.001″. Some digital models are accurate to 0.00005″. | Used for checking large hole diameters.
Micrometer (outside) | Mechanical accuracy is about 0.001″. Some digital models are accurate to 0.00005″. | Normally used to check diameter or thickness. Special models can check thread diameters.
Optical comparators | The accuracy can be within 0.0002″. | Measures difficult contours and part configurations.
Optical flats | Depending on operator skill, accurate to a few millionths of an inch. | Used only for very precise tool room work. Best used for checking flatness.
Plug gages | Very good for checking the largest or smallest hole diameter. | Used to check the diameter of drilled or reamed holes. Will not check for out-of-roundness.
Precision straight edges | Visual: 0.10″. With a feeler gage: 0.003″. | Used to check flatness, waviness, or squareness of a face to a reference plane.
Radius & template gages | Accuracy is no better than 0.015″. | Used to check small radii and contours.
Ring gages | Will only discriminate against diameters larger or smaller than the print specification. | Best application is to approximate a mating part in assembly. Will not check for out-of-roundness.
Split sphere & telescoping gages | No better than 0.0005″ using a micrometer graduated in 0.0001″. | Used for measuring small hole diameters.
Steel rules or scales | No better than 0.015″. | Used to measure heights, depths, diameters, etc.
Surface plates | Flatness expected to be no better than 0.0005″ between any 2 points. | Used to measure the overall flatness of an object.
Tapered parallels | Using an accurate micrometer, the accuracy is about 0.0005″. | Used to measure bore sizes in low volume applications.
Toolmaker's flats | Accuracy is no better than 0.0005″, depending upon the instrument used to measure the height. | Used with a surface plate and gage blocks to measure height.
Vernier calipers | About 0.001″. Some digital models are accurate to 0.00005″. | Used to check diameters and thickness.
Vernier depth gages | About 0.001″. Some digital models are accurate to 0.00005″. | Used to check depths.

Attribute Screens

Attribute screens are screening tests performed on a sample with the results falling into one of two categories, such as acceptable or not acceptable. Because the screen tests are conducted on either the entire population of items or on a significantly large proportion of the population, the screen test must be of a nondestructive nature. Screening programs have the following characteristics:

  • A clearly defined purpose
  • High sensitivity to the attribute being measured (a low false-negative rate)
  • High specificity to the attribute being measured (a low false-positive rate)
  • Benefits of the program outweigh the costs
  • Measured attributes identify major problems (serious and common)
  • Results lead to useful actions

Common applications of screening tests occur in reliability assessments and in the medical screening of individuals. In reliability assessments, an attribute screen test may be conducted to separate production units that are susceptible to high initial failure rates. This period is also known as the infant mortality period. The test simulates a customer use of the unit, or perhaps an accelerated condition of use. The number of failures, per unit of time, is monitored and the screen test continues until the failure rate has reached an acceptable level. The screen test separates acceptable items from failed items, and an analysis of the failed components is performed to find the cause of the failure. In medical screening, a specific symptom or condition is targeted and members of a defined population are selected for evaluation. Examples of this type of screening include a specific type of cancer or a specific disease. In many cases, the members of the selected population may not be aware that they have the condition being screened. Medical screening tests have the ultimate objective of saving lives.
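
The sensitivity and specificity characteristics listed above reduce to simple ratios of screen outcomes. A minimal sketch (the counts and the function name are hypothetical):

```python
def screen_metrics(tp, fp, tn, fn):
    """Sensitivity and specificity of an attribute screen.

    sensitivity = TP / (TP + FN) -- high value means a low false-negative rate
    specificity = TN / (TN + FP) -- high value means a low false-positive rate
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# 1,000 units screened: 40 truly defective (36 caught, 4 escaped),
# 960 good (20 falsely rejected)
sens, spec = screen_metrics(tp=36, fp=20, tn=940, fn=4)
```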

Tool Care

Measuring instruments are typically expensive and should be treated with care to preserve their accuracy and longevity. Some instruments require storage in a customized case or controlled environment when not in use. Even sturdy hand tools are susceptible to wear and damage. Hardened steel tools require a light film of oil to prevent rusting. Care must be taken in the application of oil since dust particles will cause buildup on the gage’s functional surfaces. Measuring tools must be calibrated on a scheduled basis as well as after any suspected damage.

1. Gage Blocks

Near the beginning of the 20th century, Carl Johansson of Sweden developed steel blocks to an accuracy believed impossible by many others at that time. His objective was to establish a measurement standard that not only would duplicate national standards, but also could be used in any shop. He was able to build gage blocks to an accuracy within a few millionths of an inch. When first introduced, gage blocks, or "Jo" blocks as they are popularly known in the shop, were a great novelty. Seldom used for measurements, they were kept locked up and were only brought out to impress visitors.
Today gage blocks are used in almost every shop manufacturing a product requiring mechanical inspection. They are used to set a length dimension for a transfer measurement, and for calibration of a number of other tools.
ANSI/ASME B89.1.9, Precision Inch Gage Blocks for Length Measurement, distinguishes three basic gage block forms: rectangular, square, and round. The rectangular and square varieties are in much wider use. Generally, gage blocks are made from high carbon or chromium alloyed steel. Tungsten carbide, chromium carbide, and fused quartz are also used. All gage blocks are manufactured with tight tolerances on flatness, parallelism, and surface smoothness.


  1. Gage blocks should always be handled on the non-polished sides. Blocks should be cleaned prior to stacking with filtered kerosene, benzene, or carbon tetrachloride. A soft clean cloth or chamois should be used. A light residual oil film must remain on blocks for wringing purposes.

    Block stacks are assembled by a wringing process which attaches the blocks by a combination of molecular attraction and the adhesive effect of a very thin oil film. Air between the block boundaries is squeezed out. The sequential steps for the wringing of rectangular blocks are shown below. Light pressure is used throughout the process.


Gage Block Sets


Individual gage blocks may be purchased up to 20″ in size. Naturally, the length tolerance of the gage blocks increases as the size increases. Typical gage block sets vary from 8 to 81 pieces based upon the needed application. The contents of a typical 81 piece set are:
Ten-thousandth blocks (9): 0.1001, 0.1002, …, 0.1009
One-thousandth blocks (49): 0.101, 0.102, …, 0.149
Fifty-thousandth blocks (19): 0.050, 0.100, …, 0.950
One inch blocks (4): 1.000, 2.000, 3.000, 4.000

For the purpose of stack protection, some gage manufacturers provide wear blocks that are either 0.050″ or 0.100″ in thickness.
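
The classic shop method for choosing blocks from a set like the one listed above is to eliminate the right-most digit of the target dimension first. The sketch below is an illustration only: the function name is invented, wear blocks and duplicate-block limits are ignored, and targets are assumed to be at least 0.200″ with at most four decimal places.

```python
def build_stack(target):
    """Greedy block selection: eliminate the right-most digit first."""
    stack = []
    rem = round(target, 4)
    # 1. ten-thousandths digit -> one 0.1001-0.1009 block
    digit = round(rem * 10000) % 10
    if digit:
        block = round(0.1000 + digit / 10000, 4)
        stack.append(block)
        rem = round(rem - block, 4)
    # 2. thousandths -> one 0.101-0.149 block, leaving a 0.050 multiple
    part = round(rem * 1000) % 50
    if part:
        block = round(0.100 + part / 1000, 3)
        stack.append(block)
        rem = round(rem - block, 4)
    # 3. fractional remainder is now a 0.050 multiple -> one 0.050-0.950 block
    frac = round(rem - int(rem), 3)
    if frac:
        stack.append(frac)
        rem = round(rem - frac, 4)
    # 4. whole inches from the 1.000"-4.000" blocks
    while rem >= 1.0:
        block = float(min(4, int(rem)))
        stack.append(block)
        rem = round(rem - block, 4)
    return stack

# 1.3276" -> 0.1006 + 0.127 + 0.100 + 1.000
stack = build_stack(1.3276)
```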

2. Calipers

Calipers are used to measure length. The length can be an inside dimension, outside dimension, height, or depth. Some calipers are used for only one of these lengths, while other calipers can be used to measure all four types of lengths.
Calipers are generally one of four types:

  • Spring calipers
  • Dial calipers
  • Vernier calipers
  • Digital calipers
    1. Spring Calipers

      Spring calipers are transfer tools that perform a rough measurement of wide, awkward or difficult to reach part locations. These tools usually provide a measurement accuracy of approximately 1/16 of an inch. Although these calipers are referred to as spring calipers, there are different varieties (spring joint, firm joint, lock joint, etc.) which describe the type of mechanical joint that connects the two sides of the unit. A spring caliper measurement is typically transferred to a steel rule by holding the rule vertically on a flat surface. The caliper ends are placed against the rule for the final readings.

    2. Vernier Calipers

      Vernier calipers use a Vernier scale to indicate the measurement of length. Length, depth and height are variations of the length measurement capability they provide. Resolution of Vernier calipers is often 0.001 inch. Although Vernier calipers are still available, they have been replaced with dial or digital calipers in many applications.

      The Vernier Scale

      Vernier scales are used on a variety of measuring instruments such as height gages, depth gages, inside or outside Vernier calipers and gear tooth Verniers. Except for the digital varieties, readings are made between a Vernier plate and beam scales. By design, some of these scales are vertical and some are horizontal. Shown below is an illustrative example of how a reading is made.
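
The arithmetic behind an inch vernier reading can be sketched in a few lines. This assumes a common configuration (0.025″ main-scale graduations with a 25-division vernier plate, so each vernier line adds 0.001″); the function name is illustrative.

```python
def vernier_reading(main_scale, coinciding_line):
    """Combine the last main-scale graduation passed (a 0.025" multiple)
    with the vernier plate line (0-24) that coincides with a main-scale
    graduation. Each coinciding line adds 0.001"."""
    return round(main_scale + coinciding_line * 0.001, 3)

# main scale shows just past 1.350"; vernier line 12 coincides
reading = vernier_reading(1.350, 12)
```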

    3. Dial Calipers

      Dial calipers function in the same way as Vernier calipers; however, the measurement is indicated by a combination of a scale reading to the nearest 0.1 of an inch and a dial reading to 0.001 of an inch. The dial hand typically makes one revolution for each 0.1 of an inch of travel of the caliper jaws. Common errors include misreading the scale by 0.1 of an inch and using the calipers in applications which require an accuracy of 0.001 of an inch, which is not realistic for this type of caliper.

    4. Digital Calipers

      Digital calipers use a digital display instead of the dial and scale found in dial calipers. Most digital calipers can be read in either inches or millimeters, and the zero point can be set at any point along the travel. Display resolutions of 0.0005 of an inch are common. Errors in reading the display are greatly reduced; however, like dial calipers, digital calipers are often used in applications that require an accuracy only a different device can attain. Digital caliper improvements have made them more reliable for use in machine shop conditions, including locations where cutting oil and metal chips come in contact with the calipers. Some models also have data interface capabilities to send measurement data directly into a computer program.

  1. Optical Comparators

    A comparator is a device for comparing a part to a form that represents the desired part contour or dimension. The relationship of the form with the part indicates acceptability. A beam of light is directed upon the part to be inspected, and the resulting shadow is magnified by a lens system and projected upon a viewing screen by a mirror. The enlarged shadow image can then be inspected and measured easily and quickly by comparing it with a master chart or outline on the viewing screen. To pass inspection, the shadow outline of the object must fall within the predetermined tolerance limits.

  2. Surface Plates

    To make a precise dimensional measurement, there must be a reference plane or starting point. The ideal plane for dimensional measurement would be perfectly flat. Since a perfectly flat reference plane does not exist, a compromise in the form of a surface plate is commonly used. Surface plates are customarily used with accessories such as a toolmaker's flat, angles, parallels, V blocks, and cylindrical gage block stacks. Dimensional measurements are taken from the plate up, since the plate is the reference surface. Surface plates must possess the following important characteristics:

    • Sufficient strength and rigidity to support the test piece
    • Sufficient and known accuracy for the measurements required
  3. Micrometers

    Micrometers are commonly used hand-held measuring devices. Micrometers may be purchased with frame sizes from 0.5″ to 48″. Normally, the spindle gap and design permit a 1″ reading span; thus, a 2″ micrometer would allow readings from 1″ to 2″. Most common "mics" have an accuracy of 0.001 of an inch. With the addition of a vernier scale, an accuracy of 0.0001 of an inch can be obtained. Improvements in micrometers have led to "super micrometers" which, with laser attachments and when used in temperature and humidity controlled rooms, are able to make linear measurements to one millionth of an inch.

    Micrometers consist of a basic C frame, with the part measurement occurring between a fixed anvil and a movable spindle. Measurement readings on a traditional micrometer are made at the barrel and thimble interface. Micrometers may make inside, outside, depth, or thread measurements based upon the customization desired. The two primary scales for reading a micrometer are the sleeve (barrel) scale and the thimble scale. Most micrometers have a 1″ "throat." All conventional micrometers have 40 markings on the barrel of 0.025″ each, with the 0.100″, 0.200″, 0.300″, etc. markings highlighted. The thimble is graduated into 25 markings of 0.001″ each. Thus, one full revolution of the thimble represents 0.025″.
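
The barrel-and-thimble arithmetic described above can be sketched directly. The function and argument names are illustrative; the graduation values (0.025″ per barrel line, 0.001″ per thimble line, 0.0001″ per optional vernier line) come from the description in the text.

```python
def micrometer_reading(barrel_lines, thimble_line, vernier_tenths=0):
    """Conventional inch micrometer: each exposed barrel line is 0.025",
    each thimble graduation is 0.001", and an optional vernier scale
    adds tenths (0.0001") to the reading."""
    return round(barrel_lines * 0.025 + thimble_line * 0.001
                 + vernier_tenths * 0.0001, 4)

# 11 barrel lines exposed (0.275") plus thimble line 13 (0.013")
reading = micrometer_reading(11, 13)
```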

  4. Ring Gages

    Ring gages are used to check external cylindrical dimensions, and may also be used to check tapered, straight, or threaded dimensions. A pair of rings with hardened bushings are generally used. One bushing has a hole of the minimum tolerance and the other has a hole of the maximum tolerance. Frequently, a pair of ring gages are inserted in a single steel plate for convenience and act as go/no-go gages. Ring gages have the disadvantage of accepting out of round work and taper if the largest diameter is within tolerance. A thread ring gage is used to check male threads. The go ring must enter onto the full length of the threads and the no-go must not exceed three full turns onto the thread to be acceptable. The no-go thread ring will be identified by a groove cut into the outside diameter.

  5. Plug Gages

    Plug gages are generally go/no-go gages, and are used to check internal dimensions. The average plug gage is a hardened and precision ground cylinder about an inch long. The go/no-go set is usually held in a hexagonal holder with the go plug on one end and the no-go plug on the other end. To make it more readily distinguishable, the no-go plug is generally made shorter. The thread plug gage is designed exactly as the plug gage but instead of a smooth cylinder at each end, the ends are threaded. One end is the go member and the other end is the no-go member. If the go member enters the female threads the required length and the no-go does not enter more than three complete revolutions, the threads are deemed acceptable.

  6. Dial Indicators

    Dial indicators are mechanical instruments for measuring distance variations. Most dial indicators amplify a contact point reading by use of an internal gear train mechanism. The standard nomenclature for dial indicator components is shown in the diagram below:
    The vertical or horizontal displacement of a spindle with a removable contact tip is transferred to a dial face. The measurement is identified via use of an indicating hand. Commonly available indicators have discriminations (smallest graduations) from 0.00002″ to 0.001″ with a wide assortment of measuring ranges. The proper dial must be selected for the length measurement and required discrimination.

  7. Pneumatic Gages

    There are two general types of pneumatic amplification gages in use. One type is actuated by varying air pressure and the other by varying air velocity at constant pressure. Depending upon the amplification and the scale, measurements can be read to millionths of an inch. In the pressure type gage, filtered compressed air divides and flows into opposite sections of a differential pressure meter. Any change in pressure caused by the variation in the sizes of the work pieces being measured is detected by the differential pressure meter. In the flow type of air gage, the velocity of air varies directly with the clearance between the gaging head and the surface being measured.

  8. Interferometry

    The greatest possible accuracy and precision are achieved by using light waves as a basis for measurement. A measurement is accomplished by the interaction of light waves that are 180° out of phase. This phenomenon is known as interference. Interference occurs when two or more beams of monochromatic light of the same wavelength are reunited after traveling paths of different lengths. When the light waves pass from a glass medium to an air medium above the surface of the object, a 180° phase change takes place. The reflected light from the surface of the test object "interferes" with the light waves of incidence and cancels them out. Irregularities are evidenced by alternate dark and light bands.
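
Since each full dark band corresponds to one half wavelength of height change between the part and the optical flat, a fringe count converts directly to a height deviation. A sketch, assuming a helium light source (roughly 23.13 microinch wavelength, a commonly tabulated value) and invented names:

```python
HELIUM_WAVELENGTH_UIN = 23.13  # microinches; approximate value for helium light

def height_deviation(fringe_count, wavelength_uin=HELIUM_WAVELENGTH_UIN):
    """Height change implied by a count of full interference bands:
    one band = one half wavelength."""
    return fringe_count * wavelength_uin / 2

# four bands of curvature observed across a part under an optical flat
deviation_uin = height_deviation(4)  # deviation in microinches
```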

  9. Laser Designed Gaging

    The use of lasers has been prevalent when the intent of inspection is a very accurate non-contact measurement. The laser beam is transmitted from one side of the gage to a receiver on the opposite side of the gage. Measurement takes place when the beam is broken by an object and the receiver denotes the dimension of the interference to the laser beam. The laser has many uses in gaging. Automated inspection, fixed gaging, and laser micrometers are just a few examples of the many uses of the laser.

  10. Coordinate Measuring Machines (CMM)

    Coordinate measuring machines are used to verify workpiece dimensions using computer controlled measurements which are taken on three mutually perpendicular axes. Workpieces are placed on a surface plate and a probe is maneuvered to various contact points to send an electronic signal back to the computer that is recording the measurements. CMMs can be driven by the computer to measure complex workpieces and perform automated inspection of complex shapes.
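
As a trivial illustration of working with data from three mutually perpendicular axes, the distance between two probed points follows directly from their coordinates (the point values below are made up):

```python
import math

def probe_distance(p1, p2):
    """Straight-line distance between two probed (x, y, z) points."""
    return math.dist(p1, p2)

# two hole centers probed on a plate, coordinates in inches
d = probe_distance((1.000, 2.000, 0.500), (4.000, 6.000, 0.500))
```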

  11. Non-Destructive Testing (NDT) and Evaluation (NDE)

    Non-destructive testing (NDT) and non-destructive evaluation (NDE) techniques evaluate material properties without impairing the future usefulness of the items being tested. Today, there is a large range of NDT methods available, including ultrasonic, radiography, fluoroscopy, microwave, magnetic particle, liquid penetrant, eddy current, and holography. The advantages of NDT techniques include the use of automation, 100% product testing and the guarantee of internal soundness. However, some NDT results, like X-ray films or ultrasonic echo wave inspection, are open to interpretation and demand considerable skill on the part of the examiner.

  12. Visual Inspection

    One of the most frequent inspection operations is the visual examination of products, parts, and materials. The color, texture, and appearance of a product give valuable information to an alert observer. Lighting and inspector comfort are important factors in visual inspection. In this examination, the human eye is frequently aided by magnifying lenses or other instrumentation. This technique is sometimes called scanning inspection.

  13. Ultrasonic Testing

    The application of high frequency vibration to the testing of materials is a widely used and important non-destructive testing method. Ultrasonic waves are generated in a transducer and transmitted through a material which may contain a defect. A portion of the waves will strike any defect present and be reflected or "echoed" back to a receiving unit, which converts them into a "spike" or "blip" on a screen. Ultrasonic inspection has also been used in the measurement of dimensional thickness. One useful application is the inspection of hollow wall castings, where mechanical measurement would be difficult because of part interference. The ultrasonic testing technique is similar to sonar. Sonic energy is transmitted by waves containing alternate, regularly spaced compressions and rarefactions. Audible human sound is in the 20 to 20,000 Hertz range. For non-destructive testing purposes, the vibration range is from 200,000 to 25,000,000 Hertz (where 1 Hertz = 1 cycle per second).
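
Pulse-echo thickness measurement follows from the echo transit time: the pulse travels down and back, so the path is halved. A sketch with invented names; the sound velocity for steel is an assumed handbook figure, not a value from the text.

```python
STEEL_VELOCITY_IN_PER_S = 232_000  # ~5,900 m/s in steel; assumed handbook value

def wall_thickness(echo_time_s, velocity=STEEL_VELOCITY_IN_PER_S):
    """Thickness from round-trip echo time: distance = velocity * time / 2."""
    return velocity * echo_time_s / 2

# an echo returning after 8.62 microseconds implies roughly a 1" wall
thickness = wall_thickness(8.62e-6)
```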

  14. Magnetic Particle Testing

    Magnetic particle inspection is a non-destructive method of detecting the presence of many types of defects or voids in ferromagnetic metals or alloys. This technique can be used to detect both surface and subsurface defects in any material capable of being magnetized. The first step in magnetic particle testing is to magnetize a part with a high amperage, low voltage electric current. Then fine steel particles are applied to the surface of the test part. These particles will align themselves with the magnetic field and concentrate at places where magnetic flux lines enter or leave the part. The test part is examined for concentrations of magnetic particles which indicate that discontinuities are present. There are three common methods in which magnetic lines of force can be introduced into a part: longitudinal inside a coil, circular magnetization and circular magnetization using an internal conductor. The selected method will depend upon the configuration of the part and the orientation of the defects of interest. Alternating current (AC) magnetizes the surface layer and is used to discover surface discontinuities. Direct current (DC) gives a more uniform field intensity over the entire section and provides greater sensitivity for the location of subsurface defects. There are two general categories of magnetic particles (wet or dry), depending upon the carrying agent used.

  15. Liquid Penetrant Testing

    Liquid penetrant inspection is a rapid method for detecting open surface defects in both ferrous and nonferrous materials. It may be effectively used on nonporous metallic and nonmetallic materials. Tests have shown that penetrants can enter material cracks as small as 3,000 angstroms. The dye molecules used in fluorescent penetrant inspection are so small that there may be no surface crack too small for penetration. The factors that contribute to the success of liquid penetrant inspection are the ability of a penetrant to carry a dye into a surface defect by capillary attraction and the ability of a developer to contrast that defect. False positive results may sometimes confuse an inspector. Irregular surfaces or insufficient penetrant removal may indicate non-existent flaws. Penetrants are not successful in locating internal defects.

  16. Eddy Current Testing

    Eddy currents involve the directional flow of electrons under the influence of an electromagnetic field. Nondestructive testing applications require the interaction of eddy currents with a test object. This is achieved by:

    • Measuring the flow of eddy currents in a material having virtually identical conductivity characteristics as the test piece
    • Comparing the eddy current flow in the test piece (which may have defects) with that of the standard

    Eddy currents are permitted to flow in a test object by passing an alternating current through a coil placed near the surface of the test object. Eddy currents will be induced to flow in any part that is an electrical conductor. The induced flow of  electrons produces a secondary electromagnetic field which opposes the primary field produced by the probe coil. This resultant field can be interpreted by electronic instrumentation. See the following diagram:
    Defect size and location cannot be read directly during eddy current testing. This test requires a comparative analysis. Part geometry may be a limitation in some test applications and a benefit in others. Eddy current methods can be used to check material thickness, alloy composition, the depth of surface treatments, conductivity, and other properties.

  17. Radiography

    Many internal characteristics of materials can be photographed and inspected by the radiographic process. Radiography is based on the fact that gamma and X-rays will pass through materials at different levels and rates. Therefore, either X-rays or gamma rays can be directed through a test object onto a photographic film and the internal characteristics of the part can be reproduced and analyzed. Because of their ability to penetrate materials and disclose subsurface discontinuities, X-rays and gamma rays have been applied to the internal inspection of forgings, castings, welds, etc. for both metallic and non-metallic products. For proper X-ray examination, adequate standards must be established for evaluating the results. A radiograph can show voids, porosity, inclusions, and cracks if they lie in the proper plane and are sufficiently large. However, radiographic defect images are meaningless, unless good comparison standards are used. A standard, acceptable for one application, may be inadequate for another.

    1. Neutron Radiography

      Neutron radiography is a fairly recent radiographic technique that has useful and unique applications. A neutron is a small atomic particle that can be produced when a material, such as beryllium, is bombarded by alpha particles. Neutrons are uncharged and move through materials unaffected by density. When X-rays pass through an object, they interact with electrons. Therefore, a material with a high electron density, such as lead, is nearly impenetrable. N-rays, on the other hand, are scattered or absorbed by particles in the atomic nuclei rather than by electrons. A metal that is opaque to X-rays is nearly transparent to N-rays. However, materials rich in hydrogen or boron, such as leather, rubber, plastics and many fluids are opaque to N-rays. The methods used to perform neutron radiography are fairly simple. The object is placed in a neutron beam in front of an image detector.

    2. Related Techniques

      There have been new developments in the radiographic field of non-destructive testing; several recent applications include fluoroscopy, gamma radiography, televised X-ray (TVX), microwave testing, and holographic inspection.

  18. Titration

    A titration is a method of analysis that allows determination of the precise endpoint of a reaction and therefore the precise quantity of reactant in the titration flask. A burette is used to deliver the second reactant to the flask, and an indicator or pH meter is used to detect the endpoint of the reaction. Titrations are used in chemical analysis to determine the quantity of a specific chemical.
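
The endpoint arithmetic reduces to a mole balance: moles delivered from the burette, scaled by the reaction's mole ratio, divided by the analyte volume. A sketch assuming a 1:1 mole ratio (e.g., HCl titrated with NaOH), with illustrative volumes and names:

```python
def analyte_molarity(titrant_molarity, titrant_ml, analyte_ml, mole_ratio=1.0):
    """Analyte concentration at the endpoint: moles delivered by the burette
    (scaled by the reaction's mole ratio) divided by the analyte volume."""
    return titrant_molarity * titrant_ml * mole_ratio / analyte_ml

# 25.0 mL of acid neutralized by 31.25 mL of 0.100 M NaOH
molarity = analyte_molarity(0.100, 31.25, 25.0)
```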

  19. Force Measurement Techniques

    A brief description of common force measurement tests is listed below.

    1. Tensile Test

      Tensile strength is the ability of a metal to withstand a pulling apart tension stress. The tensile test is performed by applying a uniaxial load to a test bar and gradually increasing the load until it breaks. The load is then measured against the elongation using an extensometer. The tensile data may be analyzed using a stress-strain curve.

    2. Shear Test

      Shear strength is the ability to resist a "sliding past" type of action when parallel, but slightly off-axis, forces are applied. Shear can be applied in either tension or compression.

    3. Compression Test

      Compression is the result of forces pushing toward each other. The compression test is run much like the tensile test. The specimen is placed in a testing machine, a load is applied and the deformation is recorded. A compressive stress-strain curve can be drawn from the data.

    4. Fatigue Test

      Fatigue strength is the ability of a material to withstand repeated loading. There are several types of fatigue testing machines. In all of them, the number of cycles is counted until a failure occurs, and the stress used to cause the failure is determined.

  20. Hardness Measurement

    Hardness testing (which measures the resistance of a material to penetration) is performed by pressing a hard ball, a diamond pyramid, or a cone into the surface of a material and then measuring the depth or size of the indentation. Hardness testing is often categorized as a non-destructive test, since the indentation is small and may not affect the future usefulness of the material.
    Listed below are the most commonly used techniques for hardness measurements.

    1. Rockwell Hardness Testing

      The most popular and widely used of all hardness testers is the Rockwell tester. This type of tester uses two loads (a minor and a major load) to perform the actual hardness test. Rockwell machines may be manual or automatic. The Rockwell hardness value is based on the depth of penetration; the value is calculated automatically and read directly off the machine scale, which minimizes reading errors. At least three readings should be taken and the hardness values averaged. There are approximately 30 different Rockwell hardness scales, the most common for testing metals being HRB and HRC.

    2. Rockwell Superficial Hardness Testing

      The superficial hardness tester is used to test thin, hard materials. It tests closer to the surface and can measure case-hardened surfaces. The testing procedures are identical to regular Rockwell testing. There are approximately 15 different superficial Rockwell hardness scales.

    3. Brinell Hardness Testing

      The Brinell hardness testing method is primarily used for bulk hardness of heavy sections of softer steels and metals. Compared to other hardness tests, the imprint left by the Brinell test is relatively large. This type of indentation is well suited to testing porous materials such as castings and forgings. Thin samples cannot be tested using this method, and since a very large force would be required to make a measurable dent in a very hard surface, the Brinell method is generally restricted to softer metals. The HBW (tungsten carbide ball) and HBS (steel ball) designations have replaced the earlier BHN (Brinell Hardness Number) scale.
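
      The Brinell number itself is the applied force divided by the spherical surface area of the indentation. A minimal sketch of that calculation (the load, ball size, and indentation diameter below are illustrative, not from a test report):

```python
import math

def brinell_hardness(force_kgf, ball_diameter_mm, indent_diameter_mm):
    """Brinell hardness: applied force divided by the spherical surface
    area of the indentation (result in kgf/mm^2)."""
    D, d = ball_diameter_mm, indent_diameter_mm
    return (2 * force_kgf) / (math.pi * D * (D - math.sqrt(D * D - d * d)))

# A typical heavy test: 3000 kgf load, 10 mm ball, 4.0 mm indentation
hb = brinell_hardness(3000, 10.0, 4.0)  # ~229
```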

    4. Vickers Hardness Testing

      Vickers hardness testing uses a square-based diamond pyramid penetrator with loads of 1 to 120 kg. The surface should be as smooth, flat, and clean as possible. The test piece should be placed horizontally on the anvil before testing. The angle between opposite faces of the diamond penetrator is approximately 136 degrees. Vickers hardness is also done as a microhardness test, with loads in the range of 25 g to 1 kg. The Vickers microhardness test is similar to the Knoop microhardness test and is done on flat, polished surfaces. The units are HV, previously DPH (Diamond Pyramid Hardness).

    5. Knoop Hardness Testing

      The Knoop is a microhardness testing method used for testing the surface hardness of very small or thin samples. A sharp, elongated diamond is used as the penetrator, with a 7:1 ratio of major to minor diagonals. Surfaces must be flat, ground very fine, and square to the axis of the load. The sample must be very clean, as even small dust particles can interfere. Loads may go as low as 25 grams. The Knoop method is used for extremely thin materials like coatings, films, and foils, and is used mainly in the research lab. The units are HK.

    6. Mohs Hardness Testing

      In 1812, the mineralogist Friedrich Mohs chose ten minerals of varying hardness and developed a comparison scale. This scratch test was probably the first hardness testing method developed. It is crude but fast, and is based on the relative hardness of ten minerals. The softest mineral on the Mohs scale is talc and the hardest is diamond.

    7. File Hardness Testing

      File hardness is a version of the scratch testing method in which a metal sample is scraped with a 1/4″ diameter double-cut round file. If the file “bites” into the material, the material is “not file hard.” If there is no mark, the material is “file hard.” This is a very easy way for inspectors to determine whether the material has been hardness treated.

    8. Sonodur Hardness Testing Method

      The Sonodur is one of the newer test methods; it uses the natural resonant frequency of a metal as the basis of measurement. The hardness of a material affects this frequency and can therefore be measured indirectly. This method is considered to be very accurate.

    9. Shore Scleroscope Hardness Testing

      The Shore Scleroscope is a dynamic hardness test that measures a material’s elastic resistance: a hammer is dropped onto the sample, and the height of its rebound is taken to be directly proportional to the hardness of the material. It is unlike the other test methods in that there is essentially no penetration; the Shore method leaves only a negligible indentation on the sample surface. A variety of materials, shapes, and sizes can be tested, and the equipment is very portable.


  1. Torque Measurement

    Torque measurement is required when the product is held together by nuts and bolts. The wrong torque can cause the assembly to fail for a number of reasons: if the torque is too low, parts may not be assembled securely enough for the unit to function properly; if it is too high, threads may be stripped, causing the unit to fail. Torque is a force producing rotation about an axis. The formula for torque is:

    Torque = Force x Distance

    For example, a force of 2 pounds applied at a distance of 3 feet produces a torque of 6 lbf-ft.
    Torque is measured with a torque wrench. There are many types of torque wrenches; however, the two most commonly used are the flexible beam type and the rigid frame type. Torque wrenches may be preset to the desired torque. The wrench will either make a distinct “clicking” sound or “slip” when the desired torque is achieved.
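
    The torque formula above is simple to compute; a minimal sketch using the example from the text:

```python
def torque_lbf_ft(force_lbf, distance_ft):
    """Torque about an axis: force times perpendicular distance (lbf-ft)."""
    return force_lbf * distance_ft

# The example from the text: 2 pounds applied at 3 feet
t = torque_lbf_ft(2, 3)  # 6 lbf-ft
```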

  2.  Impact Test

    Impact strength is a material’s ability to withstand shock. Tests such as Charpy and Izod use notched samples that are struck with a blow from a calibrated pendulum. The major differences between the two are the way the bar is anchored and the speed at which the pendulum strikes the bar.
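
    In pendulum tests of this kind, the absorbed energy is commonly taken as the potential energy the pendulum loses between its release height and its rise after impact. A sketch under that assumption (the hammer mass and heights below are illustrative, not from any test standard):

```python
G = 9.80665  # standard gravity, m/s^2

def absorbed_energy_j(hammer_mass_kg, drop_height_m, rise_height_m):
    """Energy absorbed by the notched sample: the potential energy the
    pendulum loses between release and its follow-through rise (joules)."""
    return hammer_mass_kg * G * (drop_height_m - rise_height_m)

# Illustrative: a 20 kg hammer released from 1.5 m rises to 0.9 m after the break
e = absorbed_energy_j(20, 1.5, 0.9)
```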

  3. The Steel Rule

    The steel rule is a widely used factory measuring tool for direct length measurement. Steel rules and tapes are available in different degrees of accuracy and are typically graduated on both edges.

    The finest divisions on a steel rule establish its discrimination, which is typically 1/32, 1/64, or 1/100 of an inch. Obviously, measurements of 0.010″ or finer should be performed with other tools (such as a digital caliper).


Metrology is the science of measurement. The word metrology derives from two Greek words: metron (meaning measure) and logos (meaning study). In today’s sophisticated industrial climate, the measurement and control of products and processes are critical to the total quality effort. Metrology encompasses the following key elements:

  • The establishment of measurement standards that are both internationally accepted and definable
  • The use of measuring equipment to correlate the extent that product and process data conform to specifications (expressed in recognizable measurement standard terms)
  • The regular calibration of measuring equipment, traceable to established international standards

Units of Measurement

There are three major systems of measurement: the English system, the metric system, and the International System of Units (SI; Système International d’Unités). The metric and SI systems are decimal-based; the units and their multiples are related to each other by factors of 10. The English system, although familiar, has numerous historically defined measurement units that make conversions difficult. Most of the world is now committed to the adoption of the SI system. The modern SI was established in 1960, although the transition has occurred very slowly. The final authority for standards rests with this internationally based system of units, which classifies measurements into seven distinct categories:

  1. Length (meter). The meter is the length of the path traveled by light in vacuum during a time interval of 1/299,792,458 of a second. This fixes the speed of light at approximately 186,282.4 statute miles per second; there are exactly 2.54 centimeters in one inch.
  2.  Time (second). The second is defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium – 133 atom.
  3. Mass (kilogram). The standard unit of mass, the kilogram, is equal to the mass of the international prototype, which is a cylinder of platinum-iridium alloy kept by the International Bureau of Weights and Measures at Sèvres (near Paris, France). A duplicate, in the custody of the National Institute of Standards and Technology, serves as the standard for the United States. This is the only base unit still defined by an artifact.
  4. Electric current (ampere). The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross section, and placed one meter apart in vacuum, would produce between these conductors a force equal to 2 x 10^-7 newtons per meter of length.
  5. Light (candela). The candela is defined as the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 x 10^12 hertz and has a radiant intensity in that direction of 1/683 of a watt per steradian.
  6. Amount of substance (mole). The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon-12. The elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles.
  7. Temperature (kelvin). The kelvin, the unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water. It follows from this definition that the temperature of the triple point of water is 273.16 K (0.01 °C). The freezing point of water at standard atmospheric pressure is approximately 0.01 K below the triple point. The scales are related by K = °C + 273.15 and °F = (9/5)°C + 32.
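
These scale relationships can be checked with a small conversion sketch (the function names are illustrative):

```python
def celsius_to_kelvin(c):
    """K = °C + 273.15"""
    return c + 273.15

def celsius_to_fahrenheit(c):
    """°F = (9/5)·°C + 32"""
    return c * 9 / 5 + 32

# Triple point of water, 0.01 °C
tp_k = celsius_to_kelvin(0.01)      # ≈ 273.16 K
tp_f = celsius_to_fahrenheit(0.01)  # ≈ 32.018 °F
```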


Enterprise Measurement Systems

Enterprise measurement systems relate to items that can be directly or indirectly measured or counted. Often overlooked are those key enterprise measures that are service-oriented and/or transactional in nature. These measures are often expressed as percentages or presented to management in timeline or graphical formats.
Enterprise performance can be measured and presented by using:

  • Automatic counters
  • Computer generated reports
  • Internal and external audits
  • Supplier assessments
  • Management reports
  • Internal and external surveys
  • A variety of feedback reports

The following is a non-exclusive list of items that can be measured:

  1. Suppliers

    • Number of product deviations
    • Percentage of on-time deliveries
    • Percentage of early deliveries
    • Shipment costs per unit
    • Shipment costs per time interval
    • Percentage of compliance to specifications
    • Current unit cost compared to historical unit cost
    • Dollars rejected versus dollars purchased
    • Timeliness of supplier technical assistance
  2. Marketing/Sales

    • Sales growth per time period
    • Percentage of market compared to the competition
    • Dollar amount of sales/month
    • Amount of an average transaction
    • Time spent by an average customer on website
    • Effectiveness of sales events
    • Sales dollars per marketing dollar
  3. External Customer Satisfaction

    • A weighted comparison with competitors
    • Perceived value as measured by the customer
    •  Ranking of product/service satisfaction
    • Evaluation of technical competency
    • Percentage of retained customers
  4. Internal Customer Satisfaction

    •  Employee rating of company satisfaction
    • Rating of job satisfaction
    • An indication of training effectiveness
    • An evaluation of advancement fairness
    • Feedback reaction to major policies and procedures
    • Knowledge of company goals and progress to reach them
  5. Research and Development

    • Number of development projects in progress
    • Percentage of projects meeting budget
    • Number of projects behind schedule
    • Development expenses versus sales income
    • Reliability of design change requests
  6. Engineering

    • Evaluation of product performance
    • Number of corrective action requests
    • Percentage of closed corrective action requests
    • An assessment of measurement control
    • Availability of internal technical assistance
  7. Manufacturing

    • Key machine and process capabilities
    • Machine downtime percentages
    • Average cycle times (key product lines)
    • Measurement of housekeeping control
    • Adequacy of operator training

Measurement Error

The total variability in a product includes the variability of the measurement process:

σ²Total = σ²Process + σ²Measurement

The error of a measuring instrument is the indication of the measuring instrument minus the true value, so that:

σ²Error = σ²Measurement − σ²True, or equivalently, σ²Measurement = σ²True + σ²Error

The precision of measurement can best be improved by correcting the causes of variation in the measurement process. However, it is frequently desirable to estimate the confidence interval for the mean of measurements, which includes the measurement error variation. By the central limit theorem, the confidence interval for the mean is narrowed by obtaining multiple readings, since the standard error of the mean of n readings is σ/√n.

The formula states that halving the error of measurement requires quadrupling the number of measurements.
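
This √n behavior can be demonstrated by simulation: the spread of the mean of 16 readings is roughly half that of the mean of 4 readings. A minimal sketch (the true value, instrument sigma, and trial count below are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(42)

TRUE_VALUE, SIGMA = 10.0, 0.4  # an illustrative instrument with sigma = 0.4

def mean_of_n_readings(n):
    """Average of n simulated readings of the same part."""
    return statistics.fmean(random.gauss(TRUE_VALUE, SIGMA) for _ in range(n))

def spread_of_means(n, trials=2000):
    """Standard deviation of the averaged result over many repeats."""
    return statistics.stdev(mean_of_n_readings(n) for _ in range(trials))

se_4 = spread_of_means(4)    # close to 0.4 / √4  = 0.20
se_16 = spread_of_means(16)  # close to 0.4 / √16 = 0.10
```

Quadrupling the number of readings from 4 to 16 roughly halves the spread, as the formula predicts.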

There are many reasons that a measuring instrument may yield erroneous or variable results, including the following categories:

  1. Operator Variation: This error occurs when a single operator, using the same equipment on the same standards, obtains repeated measurements that show a pattern of variation.
  2. Operator to Operator Variation: This error occurs when two operators, using the same equipment on the same standards, obtain measurements that show a pattern of variation between the operators, i.e., a bias between them.
  3. Equipment Variation: This error occurs when sources of variation within the equipment surface through measurement studies. The reasons for this variation are numerous. As an example, the equipment may experience an occurrence called drift. Drift is the slow change of a measurement characteristic over time.
  4. Material Variation: This error occurs when the testing of a sample destroys or changes the sample prohibiting retesting. This same scenario would also extend to the standard being used.
  5. Procedural Variation: This error occurs when there are two or more methods to obtain a measurement resulting in multiple results.
  6. Software Variation: With software-generated measurement programs, variation in the software formulas may result in errors, even with identical inputs.
  7. Laboratory to Laboratory Variation: This error is common when procedures for measurement vary from laboratory to laboratory. Standardized test methods, such as the ASTM procedures, have been developed to correct this type of error.


Throughout history, man has devised standards to support the common measurement tools used in trade between various parties. This standardization allows the world to establish measurement systems for use by all industries. Calibration is concerned with maintaining the accuracy of measurement standards as they deteriorate with use and time. Calibration is the comparison of a measurement standard or instrument of known accuracy with another standard or instrument to detect, correlate, report, or eliminate by adjustment any variation in the accuracy of the item being compared. The minimization of measurement error is the primary goal of calibration systems.

Calibration Interval

It is generally accepted that the calibration interval of measuring equipment should be based on stability, purpose, and degree of usage. The following basic calibration principles must be applied.

  1. The stability of a measurement instrument refers to its ability to consistently maintain its metrological characteristics over time. Stability can be determined by keeping calibration records that capture the “as found” condition as well as the frequency, inspection authority, and instrument identification code.
  2. The purpose or function of the measurement instrument is important. Whether it is used to measure door stops or nuclear reactor cores weighs heavily on the calibration frequency decision. In general, critical applications will increase frequency and minor applications will decrease frequency.
  3. The degree of usage refers to the environment as a whole. Thought must be given as to how often an instrument is utilized and to what environmental conditions an instrument is exposed. Contamination, heat, abuse, etc. are all  valid considerations.

Intervals should be shortened if previous calibration records and equipment usage indicate the need. The interval can be lengthened if the results of prior calibrations show that accuracy will not be sacrificed. Intervals of calibration are not always stated in standard lengths of time such as annually, bi-annually, or quarterly. A method gaining popularity is the verification methodology. This technique requires that very short verification frequencies (e.g., shifts, days, or weeks) be established for instruments placed into the system. The philosophy behind this system is that a complete calibration is performed only when the measuring instrument cannot be verified against the known standard. This system, when utilized properly, reduces the costs associated with unnecessary scheduled cyclic calibrations. Two key points must be made about this system.

  1. The measuring instrument must be compared to more than one standard to take into consideration the full range of use. A micrometer utilized to measure metal thickness from 0.030″ to 0.500″ should be verified with measurement standards of at least 0.030″ and 0.500″.
  2. This system is intended for those measuring instruments that are widespread throughout a facility and can be replaced immediately upon the discovery of an out of calibration condition.
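
The verification check behind this methodology can be sketched as a simple comparison against standards spanning the range of use; the tolerance and readings below are illustrative assumptions, not values from any standard:

```python
def verify(readings_by_standard, tolerance):
    """Compare instrument readings against each reference standard.

    readings_by_standard maps a standard's certified value to the
    instrument's reading of it. Verification fails (triggering a full
    calibration) if any reading deviates by more than the tolerance.
    """
    return all(abs(reading - nominal) <= tolerance
               for nominal, reading in readings_by_standard.items())

# Micrometer checked at both ends of its working range (0.030" and 0.500")
ok = verify({0.030: 0.0301, 0.500: 0.4999}, tolerance=0.0005)
```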

Measuring and test equipment should be traceable to records that indicate the date of the last calibration, by whom it was calibrated and when the next calibration is due. Coding is sometimes used.

Calibration Standards

Any system of measurement must be based on fundamental units that are virtually unchangeable. Today, a master international kilogram is maintained in France. In the SI system, most of the fundamental units are defined in terms of natural phenomena that are unchangeable. This recognized true value is called the standard.

In all industrialized countries there exists a body, such as the Bureau of Indian Standards (BIS), whose functions include the construction and maintenance of “primary reference standards.” These standards consist of copies of the international kilogram plus measuring systems responsive to the definitions of the fundamental units and to the derived units of the SI table. In addition, professional societies (e.g., the American Society for Testing and Materials) have evolved standardized test methods for measuring many hundreds of quality characteristics not listed in the SI tables. These standard test methods describe the test conditions, equipment, procedure, etc. to be followed. The various standards bureaus and laboratories then develop primary reference standards that embody the units of measure corresponding to these standard test methods.

In practice, it is not feasible for a national body such as the BIS to calibrate and certify the accuracy of the enormous volume of test equipment in use. Instead, resort is made to a hierarchy of secondary standards and laboratories, together with a system of documented certifications of accuracy. When a measurement of a characteristic is made, the dimension being measured is compared to a standard. The standard may be a yardstick, a pair of calipers, or even a set of gage blocks, but all represent some criterion against which an object is compared, ultimately traceable to national and international standards. Linear standards are easy to define and describe if they are divided into functional levels. There are five levels at which linear standards are usually described.

  • Working Level: This level includes gages used at the work center.
  • Calibration Standards: These are standards to which working level standards are calibrated.
  • Functional Standards: This level of standards is used only in the metrology laboratory of the company for measuring precision work and calibrating other standards.
  • Reference Standard: These standards are certified directly to the NIST and are used in lieu of national standards.
  • National and International Standards: This is the final authority of measurement to which all standards are traceable.

Since the continuous use of national standards is neither feasible nor practical, other standards are developed for various levels of functional utilization. National standards are taken as the central authority for measurement accuracy, and all levels of working standards are traceable to this “grand” standard. The downward direction of this traceability is shown as follows:

  1.  National bodies like Bureau of Indian Standards (BIS)
  2. Standards Laboratory
  3. Metrology Laboratory
  4. Quality Control System (Inspection Department)
  5. Work Center

The calibration of measuring instruments is necessary to maintain accuracy, but does not necessarily increase precision. Precision most generally stays constant over the working range of the instrument.

Introduction to ISO 10012 Standards:

ISO 10012-1:2003, Quality assurance requirements for measuring equipment – Part 1: Metrological confirmation system for measuring equipment, contains quality assurance requirements to ensure that measurements are made with the intended accuracy. It contains guidance on the implementation of the requirements and specifies the main features of the confirmation system. It applies to measurement equipment used in the demonstration of conformance with a specification, not to other measuring equipment, records of measurement, or competence of personnel. ISO 10012 applies to testing laboratories, including those providing a calibration service, and includes laboratories operating a quality system in accordance with ISO/IEC 17025. It also covers those who must meet the requirements of ISO 9001. An integral part of the quality system is the documentation of the control of inspection, measurement, and test equipment. The documentation must be specific in terms of which items of equipment are subject to the provisions of ISO 10012, the allocation of responsibilities, and the actions to be taken. Objective evidence must be available to validate that the required accuracy is achieved. The following are basic summaries of what must be accomplished to meet the requirements for a measurement quality system under ISO (and many other) standards.

  • All measuring equipment must be identified, controlled, and calibrated and records of the calibration and traceability to national standards must be kept.
  • The system for evaluating measuring equipment to meet the required sensitivity, accuracy, and reliability must be defined in written procedures.
  • The calibration system must be evaluated on a periodic basis by internal audits and by management reviews.
  • The actions involved with the entire calibration system must be planned. This planning must consider management system analysis.
  • The uncertainty of measurement must be determined, which generally involves gage repeatability and reproducibility and other statistical methods.
  •  The methods and actions used to confirm the measuring equipment and devices must be documented.
  • Records must be kept on the methods used to calibrate measuring and test equipment and the retention time for these records must be specified.
  • Suitable procedures must be in place to ensure that nonconforming measuring equipment is not used.
  • A labeling system must be in place that shows the unique identification of each piece of measuring equipment or device and its status.
  • The frequency of recalibration for each measuring device must be established, documented, and be based upon the type of equipment and severity of wear.
  • Where adjustments may be made that may logically go undetected, sealing of the adjusting devices or case is required.
  • Procedures must define controls that will be followed when any outside source is used regarding the calibration or supply of measuring equipment.
  • Calibrations must be traceable to national standards. If no national standard is available, the method of establishing and maintaining the standard must be documented.
  • Measuring equipment will be handled, transported and stored according to  established procedures in order to prevent misuse, damage and changes in  functional characteristics.
  • Where uncertainties accumulate, the method of calculation of the uncertainty must be specified in procedures for each case.
  • Gages, measuring equipment, and test equipment will be used, calibrated, and stored in conditions that ensure the stability of the equipment. Ambient environmental conditions must be maintained.
  • Documented procedures are required for the qualifications and training of personnel that make measurement or test determinations.
