Testing the testers can be simple, says Anritsu

Who should calibrate measurement instruments? When? And how?

Users’ questions to do with calibration are remarkably easy to answer – provided you do not think about them too deeply.

But as this article will show, there are layers of complexity underlying the issue of calibration, and they merit close examination by any user who has a commercial requirement for traceable results of proven accuracy.

It is not surprising that decisions about calibration frequently attract no more than superficial attention.

The purchase of a measurement instrument is normally a large, one-off capital expenditure, and like any large capital expenditure it is carefully considered and subject to intense management oversight.

Not surprisingly, relatively small operating expenditures, such as calibration, appear comparatively unimportant, and so much less attention is typically given to them.

From an operational point of view, calibration is a nuisance. Owners of test instruments tend to have the same attitude towards calibration as owners of shoes do to shoe polish. It costs money, and it takes the instrument out of service.

Instrument manufacturers have reacted to their customers’ inattention to calibration by providing guidelines; typically these specify a standard interval of 12 months between calibrations. But as we shall see, the idea of a ‘standard interval’ is misleading.

In fact, as soon as an engineer begins to pay attention to the questions of calibration (the who, the when and the how), those questions start to unravel to reveal other, more difficult ones.

This unravelling actually begins even before the first question – how often does the instrument need to be calibrated? – is asked. Because this hides a bigger question: what is calibration?

Interestingly, despite numerous attempts at harmonisation, no clear definition of calibration exists.

Even the industry’s standard for calibration, ISO17025, does not provide a definition on which engineers around the world can agree. It does formalise the terminology of calibration, but the decision on how to interpret this terminology is devolved to local regulators.

In the UK, for instance, the United Kingdom Accreditation Service (UKAS) imposes a strict interpretation of the standard on calibrators operating there.

Authorities in some other countries apply a much looser interpretation. This means that an ISO17025 calibration in one country can differ from an ISO17025 calibration in another. In other words, the standard is not really a standard.

This inability to make a single definition is actually a fundamental characteristic of calibration. That is because the outcome of a calibration is always fundamentally uncertain – even if the uncertainty is minuscule. So different approaches to calibration reflect users’ different attitudes to uncertainty.

The most basic kind of calibration simply verifies the status of the instrument at the time of calibration: it tells the user the margin of error that applies to the measurements it takes.

The user is therefore accepting at least this margin of error initially, but with no guidance on the rate at which the instrument’s margin of error will increase over time.

At the next level, the calibration will correlate the instrument’s status with the manufacturer’s specifications, to show whether it is in spec or out of spec. A level above this provides for adjustments to bring the instrument to the centre of its specifications.

But without a set of ‘before calibration’ and ‘after calibration’ results, the user cannot determine whether the instrument was in spec in the days, weeks or months before calibration.

This could have unfortunate consequences in a production environment, for instance, where quality control procedures could be disrupted by a risk of ‘false passes’ given by an out-of-spec test instrument. This in turn exposes the manufacturer to the risk of field failures leading to costly product recalls.

So the next level of calibration, which provides ‘before calibration’ and ‘after calibration’ results sheets, helps to manage this risk: the ‘before’ results show whether the instrument was still operating in spec when it arrived for calibration.
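
By way of illustration only, the short Python sketch below shows how ‘before calibration’ results might be screened against spec limits to flag the points at which an instrument arrived out of spec. The data structure, field names and limits are invented for the example; they are not taken from any real calibration certificate or vendor format.

```python
# Illustrative sketch: screening 'before calibration' results against spec limits.
# All names, units and limits here are hypothetical.

from dataclasses import dataclass

@dataclass
class CalPoint:
    frequency_hz: float   # test point (e.g. a frequency for an RF instrument)
    error_before: float   # measured error before adjustment, in dB
    error_after: float    # measured error after adjustment, in dB
    spec_limit: float     # manufacturer's spec limit (+/-), in dB

def out_of_spec_before(points: list[CalPoint]) -> list[CalPoint]:
    """Return the test points at which the instrument arrived out of spec.

    Any hit here means measurements taken since the previous calibration
    may have produced 'false passes' and should be reviewed.
    """
    return [p for p in points if abs(p.error_before) > p.spec_limit]

# Example: one point arrived out of spec, so earlier production test
# results covering that range would need to be re-examined.
results = [
    CalPoint(1e9, 0.12, 0.02, 0.30),
    CalPoint(2e9, 0.41, 0.03, 0.30),   # out of spec on arrival
    CalPoint(3e9, 0.25, 0.01, 0.30),
]
for p in out_of_spec_before(results):
    print(f"Review measurements near {p.frequency_hz / 1e9:.1f} GHz "
          f"(arrived at {p.error_before:+.2f} dB vs ±{p.spec_limit:.2f} dB)")
```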

Even here, however, the unravelling continues. Because when a calibrator certifies that an instrument is ‘in spec’, the user has a right to ask, ‘How certain can I be that the calibration is not itself giving a false pass?’

After all, there is a difference between a calibration performed at 100 test points and a calibration performed at 5,000 test points.

An instrument that is in spec at 100 test points might also be in spec at 4,900 other test points. Equally, it might not.

So what is the value of the extra certainty provided by the additional 4,900 test points? Would an additional 900 test points provide sufficient certainty? Or an additional 2,900? Or is the certainty provided even by a 5,000-test-point calibration insufficient?
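
To get a feel for what the extra test points buy, consider a deliberately crude back-of-the-envelope model: suppose some small fraction of the instrument’s operating range is out of spec, and treat the calibration’s test points as independent random samples of that range. Real calibration plans are designed rather than sampled at random, so the Python sketch below is illustrative only, but it shows how the chance of an undetected out-of-spec region falls as the number of test points grows.

```python
# Simplified model: if a fraction of the operating range is out of spec,
# what is the chance that a calibration with N test points misses it entirely?
# Assumes independent, uniformly spread test points, which real calibration
# plans do not satisfy; treat the numbers as illustrative only.

def probability_of_missing(out_of_spec_fraction: float, test_points: int) -> float:
    """Chance that every sampled test point happens to land in spec."""
    return (1.0 - out_of_spec_fraction) ** test_points

for n in (100, 1000, 3000, 5000):
    miss = probability_of_missing(0.001, n)   # 0.1% of the range out of spec
    print(f"{n:5d} test points -> {miss:.1%} chance of a clean (but false) pass")
```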

This question of certainty is the reason why there is no ‘normal’ interval between calibrations, and no standard definition of calibration.

In an ideal world, in fact, each user would set the specification of the calibration process by reference to their application and operating conditions. In general, the greater the risk to operational or financial performance arising from small errors in test results, the more intensive and expensive the calibration should be.

Murray Coleman, Head of Customer Services, Anritsu (EMEA)