Scope of Accreditation, Z540.1 & Z540.3

Started by DAVETEE, 03-07-2016 -- 20:08:49


USMC kalibrater

#45
Quote from: Hawaii596 on 04-07-2016 -- 09:57:11
In my last ISO17025 audit, the auditor even raised the K=2 / K=3 versus 95% / 99% question.  I know and reasonably well understand the small difference between 95% and K=2 and between 99% and K=3.  ...  But in terms of the actual measurand, what does it do?  Maybe some pragmatist uncertainty/statistical expert can give an example of its importance.
The coverage factor does not impact the measurand; the CF just asserts our confidence in any measured point with respect to the sample mean (or the measurand, in this case).  Think about where the CF enters the uncertainty equation: at the very end.
So if I give an uncertainty a confidence level of 2 sigma (95%), then all I am saying is that 95% of all measurements fell within 2 standard deviations of the mean (or measurand), the point I "think" I'm measuring.  If I claim 99%, then I'm claiming 3 standard deviations.  So 99% confidence covers more possible results than 95% when applied to the same Uc.
Neither can be directly linked to greater overall accuracy or precision; that can only be determined by a review of the Type A analysis and certain types of Type B contributors.
As an example, 99% would more likely be used on an instrument, such as a lab standard, with greater accuracy and precision, where the variance is fairly constant.  K=3 would allow for a more quantifiable representation of the data.  Remember, if you are using an uncertainty stated at a 99% confidence level in a 95% calculation, you must convert the 99% to 95%: divide by the 99% coverage factor to recover the standard uncertainty, then multiply by the 95% coverage factor (see the sketch below).
A CF of 3 encompasses about 4.3% more area under the curve than a CF of 2 (99.73% versus 95.45%, assuming a normal distribution).
see http://study.com/cimages/multimages/16/Normal_distribution_and_scales.PNG

NIST's two cents :D 
http://physics.nist.gov/cuu/Uncertainty/coverage.html
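
If you want to sanity-check those figures yourself, here's a quick Python sketch (scipy only; the 0.0030 expanded uncertainty is just a made-up number):

```python
from scipy.stats import norm

# Coverage probability (area within +/- k sigma, normal distribution)
for k in (1.96, 2.0, 2.576, 3.0):
    p = norm.cdf(k) - norm.cdf(-k)
    print(f"k = {k:5.3f} -> {100 * p:.2f}% coverage")
# k = 1.960 -> 95.00%   k = 2.000 -> 95.45%
# k = 2.576 -> 99.00%   k = 3.000 -> 99.73%

# Converting an expanded uncertainty stated at 99% for use at 95%:
U99 = 0.0030                        # hypothetical expanded uncertainty at 99%
k99 = norm.ppf(1 - (1 - 0.99) / 2)  # ~2.576
k95 = norm.ppf(1 - (1 - 0.95) / 2)  # ~1.960
U95 = U99 / k99 * k95               # back to standard uncertainty, re-expand
print(f"U(95%) = {U95:.6f}")        # ~0.002283
```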

Jason
"Be polite, be professional, but have a plan to kill everybody you meet." -General James Mattis

Hawaii596

I do understand all of those details, as I'm the one who has to answer all the auditors' questions during our ISO17025 audits; and I developed all of the tools we use and train the technicians on how to use them.  So I'm not so much looking for a mathematical answer as a philosophical one.  About 5 to 10% of our business is ISO17025 accredited calibrations.  Among those customers, I have had personal conversations with just about every one of them, and almost none of them use the uncertainties at all.  I have one customer who told me that if it were up to him, he would just get a basic certificate, as he has never even used the data points, let alone the M.U. associated with them.  So while it is mathematically correct that K=2 equates to 0.9545 and 0.9500 equates to K=1.96, I find that it amounts to useless minutiae to be concerned about that mathematical difference.  This is not a difference in reading, just in the confidence limits expressed by the expanded uncertainty.  My experience has been that the greatest pragmatic value in understanding this difference is no more than impressing the auditor that you know what you are doing for audit purposes. 

It does not seem to me that there is significant pragmatic value beyond that.  So the answer I was seeking was whether someone had anecdotal thoughts to support, pragmatically, the importance of this difference (examples of real-life circumstances where the difference between K=1.96 and K=2 has impacted anything).  I believe there may be none.  In that case, this is a "make-the-auditor-feel-good" issue.
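
For scale, the whole K=1.96 versus K=2 question is a fixed ~2% scale factor on the expanded uncertainty, never on the reading itself; a two-line sketch with a hypothetical 10 ppm uncertainty:

```python
U_k2 = 10.0                           # hypothetical expanded uncertainty at k=2, ppm
U_k196 = U_k2 / 2 * 1.96              # same standard uncertainty at k=1.96
print(U_k196)                         # 9.8 ppm
print((U_k2 - U_k196) / U_k2 * 100)   # ~2.0 percent difference
```

Unless a TUR or a guard-band decision happens to sit within 2% of its threshold, the choice between the two is invisible.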
"I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind."
Lord Kelvin (1824-1907)
from lecture to the Institute of Civil Engineers, 3 May 1883

silv3rstr3

If you want to spend all day calculating the measurement uncertainties, including lead and connector loss, calibrating a Simpson 260, knock yourself out.  I'm going home to watch some Netflix in the meantime!!  If you're able to achieve 4:1, I don't see the point of making unnecessary calculations.  The previous company I worked for told all their technicians to always list the best-case uncertainties straight off the scope of accreditation, even when most of the time they weren't using the 5720A or 3458A to make those measurements.  I'm pretty sure you're supposed to list the measurement uncertainty for the actual standards you are using.  Super happy we don't have to be A2LA accredited at my current employer!!
"They are in front of us, behind us, and we are flanked on both sides by an enemy that out numbers us 29:1. They can't get away from us now!!"
-Chesty Puller

Hawaii596

That is a no-no.  What's on the Scope is best case and doesn't normally reflect real measurement uncertainties.  I even heard of a major brand-name OEM lab (that shall remain nameless) where, I was shocked to learn through a friend who interviewed there, that was exactly how they were doing it.
"I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind."
Lord Kelvin (1824-1907)
from lecture to the Institute of Civil Engineers, 3 May 1883

N79

#49
Quote from: USMC kalibrater on 04-08-2016 -- 04:41:16
Quote from: Hawaii596 on 04-07-2016 -- 09:57:11
In my last ISO17025 audit, the auditor even raised the K=2 / K=3 versus 95% / 99% question.  I know and reasonably well understand the small difference between 95% and K=2 and between 99% and K=3.  ...  But in terms of the actual measurand, what does it do?  Maybe some pragmatist uncertainty/statistical expert can give an example of its importance.
The coverage factor does not impact the measurand; the CF just asserts our confidence in any measured point with respect to the sample mean (or the measurand, in this case).  Think about where the CF enters the uncertainty equation: at the very end.
So if I give an uncertainty a confidence level of 2 sigma (95%), then all I am saying is that 95% of all measurements fell within 2 standard deviations of the mean (or measurand), the point I "think" I'm measuring.  If I claim 99%, then I'm claiming 3 standard deviations.  So 99% confidence covers more possible results than 95% when applied to the same Uc.
Neither can be directly linked to greater overall accuracy or precision; that can only be determined by a review of the Type A analysis and certain types of Type B contributors.
As an example, 99% would more likely be used on an instrument, such as a lab standard, with greater accuracy and precision, where the variance is fairly constant.  K=3 would allow for a more quantifiable representation of the data.  Remember, if you are using an uncertainty stated at a 99% confidence level in a 95% calculation, you must convert the 99% to 95%: divide by the 99% coverage factor to recover the standard uncertainty, then multiply by the 95% coverage factor.
A CF of 3 encompasses about 4.3% more area under the curve than a CF of 2 (99.73% versus 95.45%, assuming a normal distribution).
see http://study.com/cimages/multimages/16/Normal_distribution_and_scales.PNG

NIST's two cents :D 
http://physics.nist.gov/cuu/Uncertainty/coverage.html

Sorry to keep picking nits, but this isn't quite right either.  The 95% CL only means that there is a 95% chance that the actual value of the test unit (the value we would measure if we had a perfect, zero-uncertainty measurement device) falls within the reported value +/- the reported expanded uncertainty.

For instance, if I report a measurement on a resistor at 1.000023 ohms, with an uncertainty of 1 ppm at a 95% CL, I'm saying that there is a 95% chance the actual value of the resistor is between 1.000022 and 1.000024 ohms, with the most likely value being 1.000023.

You'll see, if you test out your version, that you'll ALWAYS find that ~95% of your readings/measurements fall within 2 standard deviations of the mean and that 99.7% of your readings fall within 3 standard deviations of the mean.  This is by definition; the math just works out that way.  Of course, this only applies for normal distributions, which repeated multiple-reading measurements should approximate.
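
That's easy to demonstrate by simulation; a minimal sketch with made-up numbers (numpy only):

```python
import numpy as np

rng = np.random.default_rng(1)
# 100,000 simulated repeat readings from a normal process
readings = rng.normal(loc=100.0, scale=0.5, size=100_000)
mu, sigma = readings.mean(), readings.std()

for k in (2, 3):
    frac = np.mean(np.abs(readings - mu) <= k * sigma)
    print(f"within {k} sigma: {100 * frac:.2f}%")  # ~95.45% and ~99.73%
```

No matter what loc and scale you pick, the fractions land on ~95.45% and ~99.73%, which is the "by definition" point above.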

N79

But I tend to agree about some of the silliness that goes into an accurate uncertainty calculation, as anyone who uses the Welch–Satterthwaite equation to determine effective degrees of freedom has realized.  There is an actual COMPLETELY SUBJECTIVE coefficient involved in each term that describes your "confidence" in that term (in the GUM's formulation it enters as the degrees of freedom you assign to each Type B estimate).  So if I feel very confident about a manufacturer's specs, I'd use 1, and if I feel less confident I'd use some value less than 1, but it's completely up to me to pull this number from my ass.  As far as I know there is no good objective method to determine it.
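
For anyone who hasn't run the numbers, here's a minimal sketch of Welch–Satterthwaite, with the subjective part expressed the way GUM G.4.2 frames it, as the relative uncertainty you assign to each Type B estimate (all values hypothetical):

```python
def welch_satterthwaite(components):
    # components: (u_i, nu_i) pairs of standard-uncertainty contributions
    # and their degrees of freedom (use a huge nu for a fully trusted term)
    uc2 = sum(u ** 2 for u, _ in components)
    nu_eff = uc2 ** 2 / sum(u ** 4 / nu for u, nu in components)
    return uc2 ** 0.5, nu_eff

def type_b_dof(rel_unc_of_unc):
    # GUM G.4.2: dof from the relative uncertainty of the uncertainty
    # itself -- this is the subjective "confidence" knob
    return 0.5 * rel_unc_of_unc ** -2  # e.g. 25% doubt -> 8 dof

components = [
    (0.8e-6, 9),                 # Type A: 10 readings -> 9 dof
    (0.5e-6, type_b_dof(0.25)),  # manufacturer spec, trusted to ~25%
]
print(welch_satterthwaite(components))  # combined u and effective dof
```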

N79

Quote from: silv3rstr3 on 04-08-2016 -- 11:25:22
If you want to spend all day calculating the measurement uncertainties, including lead and connector loss, calibrating a Simpson 260, knock yourself out.  I'm going home to watch some Netflix in the meantime!!  If you're able to achieve 4:1, I don't see the point of making unnecessary calculations.  The previous company I worked for told all their technicians to always list the best-case uncertainties straight off the scope of accreditation, even when most of the time they weren't using the 5720A or 3458A to make those measurements.  I'm pretty sure you're supposed to list the measurement uncertainty for the actual standards you are using.  Super happy we don't have to be A2LA accredited at my current employer!!

But you don't really know you're at a 4:1 TUR unless you perform the calculations, and you can't perform the calculations until you have your Type A data, which is gathered during the measurement.  I've had actual cases where the Type A uncertainty (in this case, the standard deviation of repeated measurements) completely swamped the other sources of uncertainty.  If I had just compared the specs of the standard and the test unit, it would have surpassed the 4:1 requirement, but once the measurement was performed and the Type A was included it couldn't even meet 1:1.  So, at least in some cases, it is VERY important to have this calculated dynamically at the time of measurement... at least if you actually want integrity in your calibrations.
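
A sketch of how that plays out, with invented numbers (the readings below are made up to be noisy):

```python
import numpy as np

tol = 50e-6        # hypothetical UUT tolerance (+/-)
u_typeB = 5e-6     # standard uncertainty from the reference's specs (k=1)

# Spec-only check looks comfortable: TUR = tolerance / U(k=2)
print(tol / (2 * u_typeB))  # 5.0, well past 4:1

# Now fold in Type A from the actual measurement
readings = np.array([99.8e-6, 140.2e-6, 61.5e-6, 118.9e-6, 80.4e-6])
u_typeA = readings.std(ddof=1) / np.sqrt(len(readings))  # std dev of the mean
U = 2 * np.hypot(u_typeA, u_typeB)   # combined and expanded at k=2
print(tol / U)                       # ~1.7, nowhere near 4:1 anymore
```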

It's a shame there isn't good software that does all this for you.

N79

#52
Quote from: Hawaii596 on 04-08-2016 -- 08:43:03
I do understand all of those details, as I'm the one who has to answer all the auditors' questions during our ISO17025 audits; and I developed all of the tools we use and train the technicians on how to use them.  So I'm not so much looking for a mathematical answer as a philosophical one.  About 5 to 10% of our business is ISO17025 accredited calibrations.  Among those customers, I have had personal conversations with just about every one of them, and almost none of them use the uncertainties at all.  I have one customer who told me that if it were up to him, he would just get a basic certificate, as he has never even used the data points, let alone the M.U. associated with them.  So while it is mathematically correct that K=2 equates to 0.9545 and 0.9500 equates to K=1.96, I find that it amounts to useless minutiae to be concerned about that mathematical difference.  This is not a difference in reading, just in the confidence limits expressed by the expanded uncertainty.  My experience has been that the greatest pragmatic value in understanding this difference is no more than impressing the auditor that you know what you are doing for audit purposes. 

It does not seem to me that there is significant pragmatic value beyond that.  So the answer I was seeking was whether someone had anecdotal thoughts to support, pragmatically, the importance of this difference (examples of real-life circumstances where the difference between K=1.96 and K=2 has impacted anything).  I believe there may be none.  In that case, this is a "make-the-auditor-feel-good" issue.

To me, there are completely separate reasons for reporting uncertainty and being 17025 accredited.  The accreditation is really for your customers, as an assurance that your practices have been audited by a third party and you're not just pulling numbers out of your ass: that you have at least a quality system in place, you perform interlab comparisons, you are traceable to national standards, you attempt to calculate uncertainty, etc., etc.  You could actually be the best lab in the world, but without accreditation it's hard to sell that fact.  Your customers may not use the data you provide, but accreditation at least gives the impression that the data you provide, including the parts that they do actually use, is legit.

As far as uncertainty goes, as a metrologist you have to know that any reported measurement is meaningless without a properly calculated uncertainty.  It's not easy or fun or cheap, but it is the only thing that gives any kind of value to your measurement/calibration.

Edited to add an actual answer to your question: the reported result of a measurement, along with the uncertainty of that measurement, always describes a statistical distribution.  If the uncertainty is described with a coverage factor (K=1, K=2, etc.) or a confidence level (95%, 99%, etc.), that implies that the distribution fits a normal curve.  There are only two parameters needed to create this distribution: the mean (the reported value) and the standard deviation (the expanded uncertainty normalized by its coverage factor).  From these two values the customer has the entire distribution of the measurement result and can convert it to whatever form they like (see the sketch below).  If it were up to me, I'd probably have this distribution graphically represented on the test report, but having the two parameters represents the same thing.  So it is important to use the right coverage factor or confidence level when reporting your uncertainty, because you want to describe the right distribution.  As far as what difference it makes, probably not much, but why not try to be as accurate as possible?
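
A sketch of that "convert it to whatever form they like" point, reusing the resistor numbers from earlier in the thread and assuming the 1 ppm was reported at k=2:

```python
from scipy.stats import norm

value, U, k = 1.000023, 1e-6, 2   # reported value, expanded unc., coverage factor
u = U / k                         # normalized (standard) uncertainty
dist = norm(loc=value, scale=u)   # the full distribution the cert implies

print(dist.interval(0.99))        # re-expressed as a 99% interval
# Or answer a direct question: probability the true value is inside a
# hypothetical +/- 2 ppm tolerance
lo, hi = value - 2e-6, value + 2e-6
print(dist.cdf(hi) - dist.cdf(lo))  # ~0.9999
```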

silv3rstr3

The technicians that actually had integrity and respected our field of work brought up that argument openly to management.  But unfortunately that company (the one I'm still being nice and not naming) was only concerned with the bottom-line revenue number at the end of each month!  I care immensely about the quality of my work.  I've seen firsthand a massive investigation when a CH-53D's rear rotor folded in on itself taking off from MCAS Futenma and it crashed into a Japanese school on a Sunday!  What we do, especially in the defense and space industry, is crucial.

As far as knowing whether the standards you're using are 4:1, it's not as complicated as some people here are making it out to be.  The DOD procedures tell you in Table 1 if it doesn't meet 4:1.  If I have to substitute a standard at all in a procedure, I pull the specifications and do simple math to make sure it's good enough to use.  I've only recently been trying to wrap my head around the guard banding concept and why it's necessary (there's a simple sketch of it below).  Plus, most quality systems I've worked under stated that if a standard doesn't meet 4:1 you have to specify where and when on the certificate and/or the data, if required.

I attended an NCSLI event a year or so ago, and they taught a class on uncertainty and its importance.  The guy speaking impressed me with all the data and analysis he was doing for a manufacturing company.  He was able to figure out a defect in a product that no one else could, from the math alone.  I respect the principle of it all, so don't get me wrong here.  However, there isn't a whole lot of room for that analysis in the third-party cal world, or even now in an in-house lab either.  The equipment most people are using is so obsolete it's ridiculous.

The majority of the auditors I've witnessed in each environment may actually know one or two things about metrology.  They stick to the few things they know and harp on them, because that's all they cared to learn.  I've heard A2LA has tightened up their audits in the last few years.  If that's true, good for them.  Before that it was a dog and pony show when they were there, and like most things... it all boiled down to money in the end.  I've heard some outlandish stories from people at shady companies about how they passed their A2LA audits during the 2000-2012 years.  I plan to make this a career, as I do like the complexity of our profession.  I respect all of you that can do a lab-level uncertainty budget, 'cause that's no walk in the park!!
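
Since guard banding came up: the simplest version just shrinks the acceptance limits by the expanded uncertainty.  A sketch with invented numbers (the Z540.3 2%-false-accept method is more involved than this):

```python
tol_lo, tol_hi = 9.95, 10.05   # hypothetical UUT tolerance, volts
U = 0.02                       # expanded uncertainty (k=2) of the measurement

accept_lo, accept_hi = tol_lo + U, tol_hi - U   # guard-banded limits
reading = 10.04
if accept_lo <= reading <= accept_hi:
    print("pass")
elif tol_lo <= reading <= tol_hi:
    print("in tolerance but inside the guard band: flag or re-measure")
else:
    print("fail")
```

The point is that a reading near the tolerance limit could easily be out of tolerance once the measurement uncertainty is considered, so it doesn't get a clean pass.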
"They are in front of us, behind us, and we are flanked on both sides by an enemy that out numbers us 29:1. They can't get away from us now!!"
-Chesty Puller

USMC kalibrater

#54
Quote from: Hawaii596 on 04-08-2016 -- 08:43:03
I do understand all of those details, as I'm the one who has to answer all the auditors' questions during our ISO17025 audits; and I developed all of the tools we use and train the technicians on how to use them.  So I'm not so much looking for a mathematical answer as a philosophical one.  About 5 to 10% of our business is ISO17025 accredited calibrations.  Among those customers, I have had personal conversations with just about every one of them, and almost none of them use the uncertainties at all.  I have one customer who told me that if it were up to him, he would just get a basic certificate, as he has never even used the data points, let alone the M.U. associated with them.  So while it is mathematically correct that K=2 equates to 0.9545 and 0.9500 equates to K=1.96, I find that it amounts to useless minutiae to be concerned about that mathematical difference.  This is not a difference in reading, just in the confidence limits expressed by the expanded uncertainty.  My experience has been that the greatest pragmatic value in understanding this difference is no more than impressing the auditor that you know what you are doing for audit purposes. 

It does not seem to me that there is significant pragmatic value beyond that.  So the answer I was seeking was whether someone had anecdotal thoughts to support, pragmatically, the importance of this difference (examples of real-life circumstances where the difference between K=1.96 and K=2 has impacted anything).  I believe there may be none.  In that case, this is a "make-the-auditor-feel-good" issue.

Ahhh, got ya.  So, a little story:
95% = K=2 (instead of 1.96) is what I've experienced firsthand; it depends on the statistics course you take in college.  I've been working on my BS in Physics for a few years and recently switched to a degree in Management, just to get a BS done so I can get promoted (I plan on finishing my physics degree later).  When I switched degrees I changed campuses to one close to my home (Tampa traffic... sucks!).  I needed to reestablish residency at the new campus, because you need so many credit hours at the campus you intend to graduate from.
So, after completing Calc 1, 2, and 3, Diff Eq, Computational Derivation, and Engineering Stats 1 and 2 in my physics curriculum, I decided Business Calc and Business Stats would be a pretty low hurdle to jump yet still be interesting enough.  I was right on assumption one, wrong on assumption two.  These classes are really, really dumbed down; I mean really, to the point they weren't even interesting.
I digress.  In Business Stats, 95% always = 2 and 99% always = 3.  I asked the instructor why she taught it this way, and her answer boiled down to rounding: rarely in business stats are you much concerned with decimal places past two.  A generic and cheap answer, but never argue with the prof.  In her defense, the book was written the same way she taught the course.
I'll have to go back through some of my notes, recalculate some of the experiments we did, and see if 0.9545 changes the experiment results with any level of significance.  I think I have a few where it will; I'm almost positive it will for anything in the quantum arena.  I can't really think of anything in Newtonian mechanics, because the numbers aren't anywhere near the magnitudes, large or small, that you see in quantum.
I subscribe to the same theory that you do, in that a lot of what we do is just to "make the auditor happy" or "make the auditor feel smarter".
Jason
"Be polite, be professional, but have a plan to kill everybody you meet." -General James Mattis

USMC kalibrater

Quote from: silv3rstr3 on 04-08-2016 -- 11:25:22
If you want to spend all day calculating the measurement uncertainties, including lead and connector loss, calibrating a Simpson 260, knock yourself out.  I'm going home to watch some Netflix in the meantime!!  If you're able to achieve 4:1, I don't see the point of making unnecessary calculations.  The previous company I worked for told all their technicians to always list the best-case uncertainties straight off the scope of accreditation, even when most of the time they weren't using the 5720A or 3458A to make those measurements.  I'm pretty sure you're supposed to list the measurement uncertainty for the actual standards you are using.  Super happy we don't have to be A2LA accredited at my current employer!!

Have you started watching the new season of Daredevil yet?  I've been binge-watching Sons of Anarchy... why I am still watching it is beyond me.  It really went downhill fast after about season 3.
Jason
"Be polite, be professional, but have a plan to kill everybody you meet." -General James Mattis

USMC kalibrater

"However, there isn't a whole lot of room for the analysis in the 3rd Party Cal world or even now in an in house lab either.  The equipment most people are using is so obsolete it's ridiculous"

I think most companies today practice the better-safe-than-sorry calibration model.  I know we do where I'm at as well.  It feels far safer to keep calibrating even the most obviously wasteful items, like the plant electrician's DMM, or the old analog meters on power supplies when they are being monitored by 6.5-digit DMMs. 
Many companies just go through the motions to cover whatever quality system they subscribe to, in order to keep things simple.  Like others have pointed out, they pay for accredited calibrations yet don't even look at the data or need it, and they calibrate everything that could possibly need calibration even when the application doesn't require it. 

Be lean my friends!
Jason
"Be polite, be professional, but have a plan to kill everybody you meet." -General James Mattis

silv3rstr3

What takes the cake for me was when I was instructed to calibrate a clipboard at my previous place of employment.  The clipboard had a built-in ruler and calculator.  I actually made a professional-looking Excel datasheet for it and did 1" length measurements up to 12" using gage blocks.  And just to be a smart @$$ about the principle of how stupid this was, I included a pass/fail calculator accuracy table for addition, subtraction, division, and multiplication!!
"They are in front of us, behind us, and we are flanked on both sides by an enemy that out numbers us 29:1. They can't get away from us now!!"
-Chesty Puller

USMC kalibrater

I think it had a timer too, didn't it?
Jason
"Be polite, be professional, but have a plan to kill everybody you meet." -General James Mattis

silv3rstr3

Yeah, I believe it did!  LOL.  It would have been even funnier if they had wanted measurement uncertainties for this clipboard on top of it!!!
"They are in front of us, behind us, and we are flanked on both sides by an enemy that out numbers us 29:1. They can't get away from us now!!"
-Chesty Puller