Basic Principles of Engineering Metrology

Introduction

In this article I will explain the important principles in metrology and measurement systems analysis (MSA). Metrology is the science of measurement and is important in areas such as science, manufacturing and trade. Metrology enables us to know the accuracy of measurements and to ensure common standards are used.

In manufacturing it means we can make parts which fit together and perform as they were designed to.

The metrology principles introduced on this page include:

  1. Uncertainty of measurements
  2. Confidence in measurements
  3. Calibration and traceability
  4. Measurement systems analysis (MSA)
  5. Accuracy, precision and trueness
  6. Gage R&R studies
  7. Decision rules

Uncertainty of Measurement

The first thing to understand about metrology is that no measurement is exact or certain. If I were to show you a bolt and ask how long it is, you might say, “It’s about 100 mm”. The use of the word “about” implies there is some uncertainty in your estimate.

All measurements have uncertainty; a measurement is in fact just an estimate. It might be a much better estimate than the one you can make just by looking at something, but a measurement can never tell you the exact value. In metrology we must always consider this uncertainty when making a measurement.

Confidence in Measurements

Related to uncertainty is the concept of confidence. Thinking again about looking at a bolt and estimating the length, we might say “it’s about 100 mm give or take 5 mm”. This is assigning limits to our uncertainty.

We would be more confident in saying that the bolt is 100 mm plus or minus 10 mm than we would be in saying it is within 5 mm. The larger the range of uncertainty we assign, the higher our confidence becomes that it encompasses the “true” value.

I could look at the bolt and say I’m 70% sure that it’s 100 mm give or take 5 mm and 90% sure that it is within +/- 10 mm. The uncertainty of my estimate is +/- 10 mm at a confidence level of 90%.

In metrology we must be very clear about our uncertainty at a given confidence level. We can carry out an uncertainty evaluation, which includes analysis and experiments, to determine the uncertainty of a measurement.

Most measurement uncertainties approximately follow the normal distribution. For a normal distribution we have 68% confidence that the true value is within 1 standard deviation of the measured value, 95% confidence that it is within 2 standard deviations and 99.7% confidence that it is within 3 standard deviations.
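
These coverage figures can be reproduced in a few lines of code. Below is a minimal sketch, using only the Python standard library, that computes the probability of a normally distributed value falling within k standard deviations of the mean:

```python
# Minimal sketch: coverage of a normal distribution within +/- k standard
# deviations of the mean, using only the Python standard library.
import math

def normal_coverage(k: float) -> float:
    """Return P(|X - mu| <= k * sigma) for a normally distributed X."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} standard deviation(s): {normal_coverage(k):.1%}")
# within 1 standard deviation(s): 68.3%
# within 2 standard deviation(s): 95.4%
# within 3 standard deviation(s): 99.7%
```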

Calibration and Traceability in Metrology

When parts are manufactured the machines are set and the finished parts are checked using measurements. By using the same standards of measurement we know that a nut made in China will fit a bolt made in the USA. Calibration is the way that standards are transferred from one country to another, and from one instrument to another. The primary reference standards for the SI units of measurement are held in France and each country compares its national standards against these. Calibration labs compare their standards against the national standards and then use these to calibrate instruments.

The International System of Units (SI, abbreviated from the French Système international (d'unités)) is the modern form of the metric system, and is the most widely used system of measurement. ... The last new derived unit was defined in 1999.

Calibration simply means comparing one measurement with another. For example, when a ruler is made it might be compared with a reference ruler to determine where the markings are positioned. This is a simple calibration. The markings will not be at exactly the same positions as the ones on the reference ruler; there will be some uncertainty in the calibration process. The reference ruler itself will also not have markings at exactly the right positions, since there was some uncertainty when it was calibrated. The ruler therefore carries uncertainty from both the reference standard used to calibrate it and from the calibration process, so an instrument must always have a greater uncertainty than the instrument which was used to calibrate it.

When an instrument is calibrated, the uncertainty of the calibration should always be reviewed and evaluated as part of the calibration process.

A traceable measurement is one which has an unbroken chain of calibrations going back to the primary reference standard, with uncertainties calculated for each calibration.

Uncertainty of measurement is inherited down the chain of calibrations so that uncertainty increases the further we get from the primary reference standard. Traceability from the primary reference, through accredited metrology laboratories and on to industrial metrology departments ensures that we are all working to common standards and know the uncertainty of measurements.
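
To make this inheritance of uncertainty concrete, here is an illustrative sketch in which the standard uncertainty of each link in a hypothetical calibration chain (all values invented) is combined in quadrature, the usual treatment for uncorrelated sources, so the combined uncertainty can only grow further from the primary standard:

```python
# Illustrative sketch: standard uncertainty accumulating down a calibration
# chain. Uncorrelated uncertainties combine in quadrature (root sum of
# squares), so uncertainty grows further from the primary standard.
# All values are invented for illustration, in micrometres.
import math

chain = [
    ("primary reference standard", 0.01),
    ("national standards lab",     0.02),
    ("accredited calibration lab", 0.05),
    ("shop-floor instrument",      0.20),
]

u_combined = 0.0
for link, u in chain:
    u_combined = math.sqrt(u_combined**2 + u**2)
    print(f"after {link:27s}: u = {u_combined:.3f} um")
```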

The accuracy of all measuring devices degrades over time. This is typically caused by normal wear and tear. However, changes in accuracy can also be caused by electric or mechanical shock or a hazardous manufacturing environment (oils, metal chips etc.). Depending on the type of instrument and the environment in which it is being used, it may degrade very quickly or over a long period of time. The bottom line is that calibration improves the accuracy of the measuring device, and accurate measuring devices improve product quality.

Calibration of your measuring instruments has two objectives: it checks the accuracy of the instrument and it establishes the traceability of the measurement. In practice, calibration also includes repair of the device if it is out of calibration. The calibration expert provides a report showing the error in measurements with the measuring device before and after the calibration.

The role of the Calibration Engineer is therefore critical: he or she must review and compare each gauge certificate against the previous one to ensure no inaccuracies or irregularities are present in the results. Simply receiving a calibration certificate and uploading it into the system is not acceptable.

Measurement systems analysis (MSA) 

Characteristics of a Variable Measurement System

Measurement system analysis (MSA) uses scientific tools to determine the amount of variation contributed by the measurement system. It is an objective method to assess the validity of a measurement system and to minimize the factors contributing to process variation that actually stem from the measurement system. The steps below are generally followed, with the goal of obtaining acceptance for each of the five criteria:

  1. Resolution
  2. Accuracy / Bias
  3. Linearity
  4. Stability
  5. Precision

A Measurement System is the combination of gauges, hardware, software, methods, personnel, and training involved in obtaining measurements.

Standard Terms in MSA

A Gauge is any device used to obtain measurements.

A Gauge R&R typically refers to a fairly simple procedure used to quantify the "gauge error" and the "operator error".

A Measurement System Analysis involves quantifying and understanding all of the characteristics of the measurement system.

Why Do MSA?

  1. To evaluate the measurement variation in a manufacturing process.
  2. To compare the measurement variation with the engineering tolerance.
  3. To compare the measurement variation between gauges and/or methods.
  4. To facilitate gauge planning and purchasing decisions.
  5. To reduce the measurement system variation.

Gauge Repeatability and Reproducibility

Repeatability and reproducibility are two measures of precision. Repeatability measures how the results of a measurement vary when the measurement is repeated under the same conditions and within a short period of time.

Reproducibility measures how the results of a measurement vary when the measurement is repeated under changed conditions and over long periods of time.

Gauge Repeatability and Reproducibility (GRR) studies are experiments in which different quantities are each measured multiple times by different operators in order to understand the precision of a measurement process.
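
As a simplified illustration of how such a study separates the two components, the sketch below estimates repeatability from the scatter within each operator/part cell and reproducibility from the differences between operator averages. This is a deliberately reduced version of a GRR study, not the full AIAG average-and-range or ANOVA method, and all readings are invented:

```python
# Simplified sketch of a gauge R&R decomposition (not the full AIAG
# average-and-range or ANOVA method). Repeatability is estimated from the
# scatter of repeat readings within each operator/part cell;
# reproducibility from the spread of the operators' overall averages.
# All readings are invented for illustration (mm).
import statistics as st

measurements = {
    "operator_A": {"part_1": [10.01, 10.02, 10.01],
                   "part_2": [9.98, 9.99, 9.98]},
    "operator_B": {"part_1": [10.03, 10.04, 10.03],
                   "part_2": [10.00, 10.01, 10.00]},
}

# Repeatability: pooled within-cell variance of the repeat readings
cell_variances = [st.variance(reads)
                  for parts in measurements.values()
                  for reads in parts.values()]
repeatability = st.mean(cell_variances) ** 0.5

# Reproducibility: standard deviation of each operator's overall mean
operator_means = [st.mean([r for reads in parts.values() for r in reads])
                  for parts in measurements.values()]
reproducibility = st.stdev(operator_means)

grr = (repeatability**2 + reproducibility**2) ** 0.5
print(f"repeatability   = {repeatability:.4f} mm")
print(f"reproducibility = {reproducibility:.4f} mm")
print(f"combined GRR    = {grr:.4f} mm")
```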

Accuracy and Precision

Accuracy (bias) is the difference between the average of the observed measurements and a known standard or master (reference value).
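
For example, with invented numbers, bias can be computed as the mean of repeated readings minus the certified reference value:

```python
# Minimal sketch: bias as the mean of observed readings minus the
# certified reference value. All numbers are invented for illustration.
import statistics as st

reference = 25.000                                # certified master, mm
readings = [25.003, 25.001, 25.004, 25.002, 25.003]

bias = st.mean(readings) - reference
print(f"bias = {bias:+.4f} mm")                   # bias = +0.0026 mm
```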

Uncertainty of measurement is the doubt that exists about the result of any measurement. You might think that well-made gauges, clocks, thermometers etc. should be trustworthy and give the right answers. But for every measurement, even the most careful, there is always a margin of doubt.

Every measurement has an uncertainty. When measuring something, this uncertainty should be smaller than the smallest increment to which you need to be accurate. Not knowing the uncertainty in your measurement makes the measurement useless.

• How accurate you need or want to be depends on what you’re doing (and how much £/€/$ you have).

Type A and Type B Sources of Uncertainty

Sources of uncertainty are classified as Type A if they are estimated by statistical analysis of repeated measurements or Type B if they are estimated using any other available information.

For example, to find the repeatability of an instrument we could simply measure the same quantity 20 times and analyse the spread of the results; this would be a Type A evaluation of uncertainty.

Finding the uncertainty of our calibration reference standard by this method would be impractical; instead we simply look up the uncertainty on its calibration certificate. This is a Type B evaluation of uncertainty.
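
Putting the two together, the sketch below estimates a Type A standard uncertainty from 20 repeat readings (as in the example above) and combines it in quadrature with a Type B value taken from a hypothetical calibration certificate; all numbers are invented:

```python
# Sketch: a Type A standard uncertainty estimated from 20 repeat readings,
# combined in quadrature with a Type B value taken from a hypothetical
# calibration certificate. All values are invented for illustration.
import statistics as st

readings = [9.99, 10.01, 10.00, 10.02, 9.98, 10.00, 10.01, 9.99, 10.00,
            10.02, 9.99, 10.01, 10.00, 9.98, 10.01, 10.00, 10.02, 9.99,
            10.00, 10.01]                          # mm

# Type A: standard uncertainty of the mean from the observed scatter
u_type_a = st.stdev(readings) / len(readings) ** 0.5

# Type B: standard uncertainty quoted on the reference's certificate
u_type_b = 0.005                                   # mm

u_combined = (u_type_a**2 + u_type_b**2) ** 0.5
print(f"u_A = {u_type_a:.4f} mm, u_B = {u_type_b:.4f} mm, "
      f"combined u = {u_combined:.4f} mm")
```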

Some Other Typical Sources of Uncertainty

Uncertainty of measurement can arise from many sources. We tend to focus on dimensional measurements, but the principles of uncertainty evaluation and many of the sources of uncertainty apply to any measurement, for example time, temperature or mass.

Some sources of uncertainty found in virtually all traceable measurements are the uncertainty of the reference standard used for calibration, the repeatability of the calibration process and the repeatability of the actual measurement. Environmental factors such as temperature are significant sources of uncertainty for many measurements. Resolution or rounding may not be a significant source of uncertainty for digital instruments reading out to many decimal places beyond the repeatability of the system. Alignment is often a major source of uncertainty for dimensional measurements.

Alignment Errors

Alignment is a common source of uncertainty in dimensional measurement. For example, when the distance between two parallel surfaces is measured, the measurement should be perpendicular to the surfaces. Any angular deviation from the perpendicular measurement path results in a cosine error, in which the actual distance is the measured distance multiplied by the cosine of the angular deviation.
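
A quick numerical sketch of the cosine error, with invented values:

```python
# Minimal sketch of the cosine error described above: the actual distance
# is the measured distance multiplied by cos(theta). Values are invented.
import math

measured = 100.000                       # mm, along the misaligned path
theta_deg = 1.0                          # angular misalignment, degrees

actual = measured * math.cos(math.radians(theta_deg))
print(f"actual distance = {actual:.4f} mm")             # 99.9848 mm
print(f"cosine error    = {measured - actual:.4f} mm")  # ~0.0152 mm
```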

Cosine Error is a Common Source of Uncertainty of Measurement

Abbe error results from the alignment of machine axes. The distance between the axis along which an object is being measured and the axis of the instrument’s measurement scale is known as the Abbe offset. If there is an angular error between these axes, so that the distance along the object is not transferred to the scale in a direction perpendicular to the scale, a measurement error results. The size of this error is the tangent of the angular error multiplied by the Abbe offset.
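
And a corresponding sketch of the Abbe error formula, again with invented values:

```python
# Minimal sketch of the Abbe error formula given above:
# error = Abbe offset x tan(angular error). Values are invented.
import math

abbe_offset = 30.0                       # mm, scale to measurement axis
angular_error_deg = 0.05                 # degrees

error = abbe_offset * math.tan(math.radians(angular_error_deg))
print(f"Abbe error = {error * 1000:.1f} um")            # ~26.2 um
```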

Instruments such as Vernier callipers are susceptible to Abbe Error as the measurement scale is not co-axial with the object being measured. Micrometers are not susceptible to Abbe Error.

An Instrument is Susceptible to Abbe Error if the Measurement Scale is not Co-Axial with the Axis of Measurement 

Temperature

Temperature variations affect measurements in a number of ways:

  • Thermal expansion of the object being measured and of the instrument used to measure it (see the sketch after this list)
  • For interferometric measurements, changes in the refractive index of the air
  • For optical measurements which depend on light following a straight-line path, temperature gradients cause refraction, bending the light and therefore distorting the measurement
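
As a rough illustration of the first bullet, the sketch below computes thermal expansion as dL = alpha × L × dT; the expansion coefficient is a typical handbook value for carbon steel and the other numbers are invented:

```python
# Rough illustration: thermal expansion computed as dL = alpha * L * dT.
# The expansion coefficient is a typical handbook value for carbon steel;
# the other numbers are invented.
ALPHA_STEEL = 11.5e-6       # per degC, approximate for carbon steel

length = 100.0              # mm, nominal length at 20 degC
delta_t = 2.0               # degC away from the 20 degC reference

delta_l = ALPHA_STEEL * length * delta_t
print(f"expansion = {delta_l * 1000:.1f} um")   # ~2.3 um per 100 mm
```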

Calibration

Any errors in the reference standard used to calibrate a measurement instrument are transferred during calibration. Instruments therefore inherit uncertainty from their calibration standard. The actual process of calibration is also not perfectly repeatable; therefore additional uncertainty is introduced through the calibration process.

If calibration has been carried out by an accredited calibration lab then an uncertainty will be given on the calibration certificate. This is not the uncertainty for measurements made using the instrument; it is simply the component of uncertainty due to calibration. This point is often overlooked.

When carrying out a calibration a complete uncertainty evaluation must be carried out for the calibration process. The combined uncertainty for the calibration then becomes a component of uncertainty for measurements taken using the instrument

Resolution

Uncertainty of measurement due to resolution is a result of rounding errors. For many digital instruments the readout resolution is many times smaller than the actual instrument uncertainty. In such cases rounding errors due to the instrument resolution are insignificant.

For more traditional instruments, resolution is often a significant source of uncertainty. The maximum possible error due to rounding is half of the resolution. For example, when measuring with a ruler which has a resolution of 1 mm, the rounding error is up to +/- 0.5 mm and has a rectangular distribution.
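
Treating that rectangular distribution in the standard way, the half-width of the rounding error is divided by the square root of three to obtain a standard uncertainty, as this sketch shows:

```python
# Sketch: converting a resolution limit into a standard uncertainty.
# A rounding error bounded by +/- half the resolution, with a rectangular
# distribution, has standard uncertainty (resolution / 2) / sqrt(3).
import math

resolution = 1.0                         # mm, e.g. a ruler graduation
half_width = resolution / 2              # +/- 0.5 mm rounding bound

u_resolution = half_width / math.sqrt(3)
print(f"u = {u_resolution:.3f} mm")      # u = 0.289 mm
```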

In MSA terms, resolution describes:

  • The number of decimal places that can be measured by the system.
  • The ability to detect small changes in the characteristic being measured.
  • Whether the gauge is capable of dividing measurements into a sufficient number of “data categories” over the range of the part tolerance.

The Ten-to-One Rule

For gauge performance, we apply the 10-to-1 rule:

If your tolerance is ±0.0005 inch, you need a gage with a performance rating of at least 10 times that, or within one-tenth (±0.00005 inch). ... Typically, the rule of thumb for selecting a master has been to choose one whose tolerance is 10 percent of the part tolerance.
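
Expressed as a calculation, using the figures from the quote above:

```python
# Sketch of the 10-to-1 rule of thumb using the figures quoted above:
# the gauge should perform to one tenth of the part tolerance, and the
# master is typically chosen at 10 percent of the part tolerance.
part_tolerance = 0.0005                   # inches (+/-)

gauge_requirement = part_tolerance / 10   # +/- 0.00005 in
master_tolerance = part_tolerance * 0.10  # +/- 0.00005 in

print(f"gauge performance needed: +/- {gauge_requirement:.6f} in")
print(f"master tolerance:         +/- {master_tolerance:.6f} in")
```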

Decision Rules for Proving Conformance

If measurements are uncertain, how can they prove anything? For example, does a measurement prove that a part is out of tolerance? There are some simple rules which allow us to state, at a given statistical confidence level, whether a measurement proves or disproves conformance. It is also possible that the result is inconclusive, in which case we may wish to make further measurements with reduced uncertainty.

The tolerance for a dimension defines an upper specification limit (USL) and a lower specification limit (LSL). For example if a dimension is specified as 10 mm +/- 0.1 mm then the USL is 10.1 mm and the LSL is 9.9 mm. In order to prove conformance, at a given confidence level, we must add the expanded uncertainty (U), at that confidence level, to the LSL and subtract the expanded uncertainty from the USL.
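
This decision rule is easy to express in code. The sketch below, with an invented expanded uncertainty, classifies a measured value as proving conformance, proving non-conformance, or inconclusive:

```python
# Sketch of the decision rule described above: shrinking the tolerance
# band by the expanded uncertainty U gives the conformance zone. U here
# is invented for illustration.
def classify(measured: float, lsl: float, usl: float, U: float) -> str:
    if lsl + U <= measured <= usl - U:
        return "conformance proved"
    if measured < lsl - U or measured > usl + U:
        return "non-conformance proved"
    return "inconclusive: re-measure with reduced uncertainty"

# Dimension specified as 10 mm +/- 0.1 mm, expanded uncertainty U = 0.02 mm
for value in (10.00, 10.09, 10.13):
    print(f"{value:.2f} mm -> {classify(value, 9.9, 10.1, 0.02)}")
```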

Normal accuracy on even the best CMMs is only about 0.000050″. Then you have to bring repeatability into question. I don’t think anyone in their right mind would attempt a GRR on the 0.000001″ measurements we see on laser mics and some air gauges. Someone once asked me when I first started in this business, “Do you even know what a tenth is?”

So, having been in this business and around CMMs, lasers, optical measuring devices and various other gauges for the past 30 years, I ask you:

Do you have any idea what a millionth of an inch is, and how difficult it would be to measure? I could breathe on a piece of metal after a cup of coffee and change the dimension by more than that.

Laser, yes, in the right environment; air gauge, not a chance. The very best I’ve been able to make air gauges work, in the very best of conditions, is half a tenth (50 millionths). And even then the regulator was not up to the job, and maintaining the air temperature, humidity, etc. was a nightmare. Too many variables. And then they are only as good as the masters. What grade is the master: 0.000001″?

We are shifting towards millionth-of-an-inch measurement faster than ever before. Increasingly technical and higher-precision products make the need for high-resolution precision measurement ever more profound.

With typical measurement applications, much of our attention is focused on the gauge itself: As long as the instrument is designed to the required degree of accuracy and maintained properly, we can usually get by, even at the “tenths” level.

However, when trying to measure tolerances of 100, 50 or even 30 “millionths” (microinches), we must shift our focus to the measurement process and the environment in which it takes place. Where uncertainty, temperature, cleanliness and other MSA factors were formerly somewhat abstract issues, they now become essential concerns.

All micro-inch gauging must therefore be performed in a controlled environment: a special room that is thermally insulated from the shop floor, with minimal exposure and no through traffic from non-essential personnel. Temperatures should be kept as close to 68°F as possible, and changes must not exceed 2°F per hour. When high-precision parts come in from the shop, they should sit for several hours on a heat sink (a large steel plate) to bring them into equilibrium with the master and the gauge before they are measured. Even with all these precautions, the gauge should be mastered frequently by conducting daily stability studies.
