Scope of Accreditation, Z540.1 & Z540.3

Started by DAVETEE, 03-07-2016 -- 20:08:49

briansalomon

Yes, I see the 99% confidence interval as the result of a 2.58 coverage factor and a 95% confidence interval as the result of a 1.96 coverage factor. That's important.

So if I'm saying my instrument is in tolerance 95% of the time (most labs) I would use K = 1.96, and if I'm saying it's 99% I'd use K = 2.58.
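Those two factors come straight from the standard normal distribution's inverse CDF (a two-sided 95% interval leaves 2.5% in each tail, so you look up the 97.5th percentile). A quick sanity check, assuming Python 3.8+ for statistics.NormalDist:

```python
from statistics import NormalDist

# Two-sided coverage: a 95% interval leaves 2.5% in each tail,
# so the coverage factor is the 97.5th percentile of the standard normal.
k95 = NormalDist().inv_cdf(0.975)   # ~1.96
k99 = NormalDist().inv_cdf(0.995)   # ~2.58

print(f"k for 95%: {k95:.4f}")
print(f"k for 99%: {k99:.4f}")
```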

And you're right N79, they don't spell that out in the paper but they do make a good point about the expense curve as you increase the confidence level.



Bring technical excellence with you when you walk in the door every day.

N79

I'm not sure that's right. The 95% CL representing an expanded uncertainty usually only pertains to the measurement and does not predict future performance or value. The "95%" is basically a way of saying that you're 95% sure the (unknowable) true value of the measurement falls within the stated uncertainty with the most probable value being the reported value (the mean).

If I take 20 measurements of an item and report the mean of those measurements as my measured value, I can also report the combined uncertainty for that measurement as the standard deviation. These two parameters describe the distribution of values, assuming the samples and combined Type B sources of uncertainty fit a normal curve. You then can report an expanded uncertainty which is just multiplying this "std dev" by some factor (usually either 2 or 3 or 1.96 or 2.58). But all of these values represent the same distribution, so I don't really see the point in reporting this type of expanded uncertainty at all. Or maybe I'm missing something... which is probably the case!  :-D
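A minimal sketch of that 20-measurement example, using made-up readings. As described above, multiplying by a coverage factor only widens the quoted interval; both expanded values describe the same underlying distribution:

```python
import statistics

# Hypothetical repeated readings of the same item (20 samples)
readings = [9.98, 10.02, 10.01, 9.99, 10.00, 10.03, 9.97, 10.01,
            10.00, 9.99, 10.02, 9.98, 10.01, 10.00, 9.99, 10.02,
            10.00, 9.98, 10.01, 10.00]

mean = statistics.mean(readings)   # reported measured value
u = statistics.stdev(readings)     # standard uncertainty (Type A here)

U95 = 1.96 * u                     # expanded uncertainty, ~95% coverage
U99 = 2.58 * u                     # expanded uncertainty, ~99% coverage

print(f"{mean:.4f} ± {U95:.4f} (k = 1.96)")
print(f"{mean:.4f} ± {U99:.4f} (k = 2.58)")
```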

briansalomon

Here is a good way to understand the basic idea.

Take the tolerance of the instrument and square it.

Take 5 readings and find the average. Then find the standard deviation and square that.

(the standard deviation describes the spread of the readings around that average)

Add these two together and find the square root.

For now, look at this as "all of the things that affect the accuracy of your readings" (unexpanded uncertainty, not just the tolerance)

Now multiply the unexpanded uncertainty by 1.96 to find the 95% confidence level (expanded uncertainty).

Now multiply the unexpanded uncertainty by 2.58 to find the 99% confidence level (expanded uncertainty).

The two factors are applied to "all of the things that affect the accuracy of your readings" and it makes the uncertainty larger or smaller.

If the expanded uncertainty is larger, it makes the distribution wider, and it's more likely the readings will fall within the distribution.

If the expanded uncertainty is smaller it's less likely the readings will fall within the distribution. If we're using the normal distribution this is the bell curve we're familiar with. The bell curve can get larger or smaller but it stays the same shape.

It sounds too simple but you're just multiplying your unexpanded uncertainty by a smaller or larger number to "expand" it to a 95% or 99% confidence level.
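The recipe above can be sketched in a few lines of Python. The tolerance and readings here are made-up numbers, and (per the correction later in the thread) the squared spec term should really be the accuracy of the standard rather than the DUT's tolerance:

```python
import math
import statistics

# Hypothetical inputs: a spec term and five repeat readings
tolerance = 0.05                  # spec of the standard (squared below)
readings = [10.01, 9.99, 10.02, 10.00, 9.98]

s = statistics.stdev(readings)    # standard deviation of the readings

# Root-sum-square: square each term, add them, take the square root
u_combined = math.sqrt(tolerance**2 + s**2)   # "unexpanded" uncertainty

U95 = 1.96 * u_combined           # expanded, ~95% confidence level
U99 = 2.58 * u_combined           # expanded, ~99% confidence level

print(f"combined: {u_combined:.4f}, U95: {U95:.4f}, U99: {U99:.4f}")
```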

After you get comfortable with this, there are more things that affect the accuracy of your readings like temperature which have to be addressed.

Once someone explained it to me this way I got a better idea of what uncertainty represented.

Bring technical excellence with you when you walk in the door every day.

CalibratorJ

Quote from: briansalomon on 04-05-2016 -- 14:24:31
Take the tolerance of the instrument and square it.
I think it should be "Take the accuracy of the standard and square it".

I.E. if you are calibrating a handheld DMM you would RSS the output accuracy of your calibrator (5720, etc) and your standard deviations, not the tolerances of the instrument and the standard deviations.

briansalomon

Yes, thank you. It's the uncertainty of the calibration we would be interested in.
Bring technical excellence with you when you walk in the door every day.

CalibratorJ

But, that is also at the time of the calibration, it has nothing to do with future performance and/or guarantee that the DUT will remain within the stated tolerances for the duration of the calibration cycle.

The coverage factors only really come into play if you are guardbanding or if the manufacturer lists the instrument's accuracies in a specific coverage factor (Fluke loves to use K=1). If they don't list a coverage factor, I believe it is assumed to be 2, but I'm not an expert on uncertainties by any means.

N79

briansalomon, my point is that reporting the expanded uncertainty (at any confidence level or coverage factor) does NOT change the shape of the distribution that the reported value and reported uncertainty represent.

A simple way of thinking about this is imagining a set of data with mean of 0 and standard deviation of 1. If you plot this distribution (x-axis is the values, y values the probability of each value) you'd get your typical normal curve that peaks at 0. If you shaded the area under the curve between -1 and 1, you'd would be shading k = 1, or ~68% of the curve. If you shaded between -2 and 2 (two std devs, or k = 2) you'd end up shading ~95% of the curve. If you shaded between -3 and 3 you would end up shading ~99.7% of the curve and so on and so forth. I don't think it's possible to shade 100% of the curve unless you had an infinitely long piece of paper to draw this on.

So, the shape of the distribution doesn't change regardless of what coverage factor you report. It really doesn't matter: all coverage factors and confidence levels of a Gaussian distribution represent the same distribution, with the same mean and variance or standard deviation (which is equivalent to the combined uncertainty in our world of metrology).
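That shading picture can be checked numerically: for a normal distribution, the fraction of the area within ±k standard deviations is erf(k/√2), and it never quite reaches 100% no matter how large k gets:

```python
import math

def coverage(k: float) -> float:
    """Fraction of a normal distribution within ±k standard deviations."""
    return math.erf(k / math.sqrt(2))

# k = 1 shades ~68%, k = 2 ~95%, k = 3 ~99.7% of the curve
for k in (1.0, 1.96, 2.0, 2.58, 3.0):
    print(f"k = {k}: {coverage(k):.4%}")
```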

USMC kalibrater

Quote from: briansalomon on 04-04-2016 -- 15:22:18
So if I'm saying my instrument is in tolerance 95% of the time (most labs) I would use K= 1.96 and if I'm saying it's 99% I'd use 2.58.

...Wuuut?
Stating my uncertainty at 95% does not imply that 5% of the time it might be out of spec!  Confidence levels describe the probability of a reading being within a certain number of standard deviations...
In other words, if I claim 95% confidence level then I am stating that 95% of my readings will be within two standard deviations. 
Jason
"Be polite, be professional, but have a plan to kill everybody you meet." -General James Mattis

briansalomon

Thank you for correcting that for me. I should have begun my post by saying that I am no expert when it comes to uncertainty.

I'll try to be more thorough.
Bring technical excellence with you when you walk in the door every day.

Duckbutta

The back and forth on this topic brings into stark relief for me the insanity that we are dealing with. The statisticians have taken over and ruined a once great industry. It's less and less about the measurements, and more and more about the math. That's not my cup of tea, and the vast majority of calibrations don't require such rigorous analysis. That's not to say that there isn't any place for that stuff, especially with equipment getting more accurate by the day, but when it starts affecting Fluke 77 cals, I'm out.

ck454ss

Quote from: Duckbutta on 04-06-2016 -- 19:15:52
The back and forth on this topic brings into stark relief for me the insanity that we are dealing with. The statisticians have taken over and ruined a once great industry. It's less and less about the measurements, and more and more about the math. That's not my cup of tea, and the vast majority of calibrations don't require such rigorous analysis. That's not to say that there isn't any place for that stuff, especially with equipment getting more accurate by the day, but when it starts affecting Fluke 77 cals, I'm out.

So much this ^

The problem with these standards is that they place too many "Shalls" in the requirements and not enough "Shoulds".  The specs leave no room for me to make a judgment on whether I really need all these uncertainties/guard banding/etc. on my equipment.  As I said before, my maintenance crew does not need a 17025 cal on their meters used to verify voltage for Lock Out/Tag Out, but because we are certified I have to do it.  It's just plain stupid and a waste of money imo.

silv3rstr3

I think it's overkill in most applications.  On a funny note, though, a co-worker reminded me of a story from the third party cal lab we used to work in together.  One of the shipping and receiving guys was attempting to remove old calibration stickers off pipettes with his teeth!!  We explained to him what pipettes are typically used for and he almost threw up!  Another shipping guy around the same time took 15kV to the stomach from an ESD gun that was shipped to us holding a charge.  Still makes me laugh thinking about stuff like that!
"They are in front of us, behind us, and we are flanked on both sides by an enemy that out numbers us 29:1. They can't get away from us now!!"
-Chesty Puller

RFCAL

I agree--This is nothing but a waste of time and $$.

Hawaii596

In my last ISO17025 audit, the auditor even asked the K=2 vs. 95% / K=3 vs. 99% question.  I know and reasonably well understand the small difference between 95% and K=2 and between 99% and K=3.  But thankfully this was a smart auditor who wasn't so much nit-picking as testing me to see if I knew what I was doing.  But yes, this is one of those things that makes so little difference in anything.  I don't have the time or inclination to spend on this, but it would be interesting to see what difference, if any, that fraction of a percent in confidence makes.  It shifts the indeterminate threshold a teeny bit.  It bumps the M.U. up (or down) a tiny bit, etc.  But in terms of the actual measurand, what does it do?  Maybe some pragmatist uncertainty/statistical expert can give an example of its importance.
"I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind."
Lord Kelvin (1824-1907)
from lecture to the Institute of Civil Engineers, 3 May 1883

briansalomon

Agreed, it's usually a waste of energy.

On a less serious note - I do recall one IFF Transponder Test Set sent in to the PMEL for cal with the complaint "Works in the IFF position, does not work in the OFF position"....

I'd just like to observe that I am at least 99% certain he wasn't following the procedure.
Bring technical excellence with you when you walk in the door every day.