Hawaii596, (Can I call you Jerry?)
You are the last person that should be posting anything relating to Measurement Uncertainty. I stumbled across a post you made in a different forum and it's obvious to anyone that has even a rudimentary understanding of the concept, that you have no idea what you're talking about. UNCERTAINTY and ACCURACY are NOT INTERCHANGEABLE. They're apples and oranges, both related but not the same thing. So before you go spouting off like you have some sort of credibility on this issue, I suggest you do some more research. Try Google. You'd be amazed at what you'll find.
Here is a link to the post I'm referring to (It's the 4th post in the thread):
http://elsmar.com/Forums/showthread.php?t=31898
And don't try to deny it's you, there's no mistaking that face of yours. If you're the Moderator on that forum, those people are in trouble. That's the equivalent of the blind leading the blind. It may be a little unfair of me to expect a Navy trained calibrator to have a firm grasp of advanced metrology principles, and for that I apologize.
WOW, a little harsh. :-o
It is true that the stated accuracy of the DUT has nothing to do with measurement uncertainty. The specified accuracy of the DUT is not even a factor in calculating measurement uncertainty, except for the resolution of its least significant digit (LSD).
In a generic sense, UNCERTAINTY and ACCURACY are pretty closely related.
They are used differently and applied a little differently. I have honestly long detested such rhetorical/semantic issues. I hate the use of the terms "WORKING STANDARD," "PRIMARY STANDARD," "SECONDARY STANDARD," etc.
I equally hate such battles about the fine differences between ACCURACY and UNCERTAINTY. Yes, certainly, UNCERTAINTY refers to the algebraic combination of contributors to potential error. And ACCURACY is a number (which is statistically derived during design in much the same way as UNCERTAINTY). However, UNCERTAINTY takes into account the so-called ACCURACY (which is, by the way, an ambiguous term), along with the other external contributors in a measurement.
When you get down to the base definition of UNCERTAINTY, it is potential INACCURACY of the measurement.
I get pretty fed up with the Metrological Eggheads out there who love nothing more than the sound of their own voice as they use the big words. It is, in my estimation, some egghead's way of making something pretty straightforward sound bigger than it is. I know how to do RSS calculations. I know how to interpret and calculate specifications.
The uncertainty of a measurement is the statistical probability of how far the measurand will lie from the absolute nominal. The accuracy of a measurement is the specified statistical probability of how far a measurement made with a given instrument will lie from the absolute nominal.
Accuracy, Uncertainty??? Yes, two different terms with two differing applications that have very minute differences.
MEASUREMENT UNCERTAINTY: non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used.
MEASUREMENT ACCURACY: closeness of agreement between a measured quantity value and a true quantity value of a measurand.
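(To make the "algebraic combination" concrete, here is a minimal sketch of an RSS combination; the contributor names and values are purely hypothetical, not taken from any real budget.)

```python
import math

# Minimal sketch of an RSS (root-sum-square) combination of uncertainty
# contributors. The contributor names and values below are hypothetical.
standard_uncertainties = {
    "reference standard": 25e-6,      # volts, already expressed at k=1
    "DUT resolution": 10e-6,          # volts, k=1
    "repeatability (Type A)": 15e-6,  # volts, k=1
}

# Combined standard uncertainty: square root of the sum of the squares.
u_c = math.sqrt(sum(u**2 for u in standard_uncertainties.values()))

# Expanded uncertainty at an approximate 95 % level using coverage factor k = 2.
k = 2
U = k * u_c
print(f"u_c = {u_c:.2e} V, U (k={k}) = {U:.2e} V")
```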
Thanks Coup, we'll get along just fine. The point that I was trying to make in my recent posts is that if you are only providing Type B uncertainties in your scope, you're only telling half the story. Any nitwit with a calculator can compute measurement uncertainty that way. An equally important factor, perhaps more so, is the person making the measurement. And they haven't, to the best of my knowledge, invented a calculator that can quantify that. You and I both know that two different people performing the same measurement can come up with vastly different results. You can find examples in your lab every day. It all boils down to EXPERIENCE and TECHNIQUE. Just knowing the uncertainty of the measurement standard is of little use if, for instance, a guy is trying to make a high-accuracy DC Volts measurement with banana leads. That may be a little extreme, but you get the point. Another way of saying it is that not all measurements made with the same standard are created equal.
Quote from: Hawaii596 on 01-19-2009 -- 14:26:55
In a generic sense, UNCERTAINTY and ACCURACY are pretty closely related... However, UNCERTAINTY takes into account the so-called ACCURACY (which is, by the way, an ambiguous term), along with the other external contributors in a measurement.
Maybe I am reading you wrong: are you saying that "UNCERTAINTY takes into account the so-called ACCURACY" of the DUT, or the stated uncertainty of the standard? The stated accuracy of the DUT has no effect on the measurement uncertainty other than the LSD of the DUT. You weren't very clear in your statement.
I wish some of the "gurus" out there who really knew their stuff would write a "bible" on uncertainties that us non-eggheads could understand (think "uncertainties for dummies"). I'm up to my armpits in uncertainties, and having attended a number of the short courses they put on at NCSL conferences, I've had my fair share of training, yet most of the subject matter still flies over my head. For example, today I grappled with uncertainty calculations for a torque tester. What all comes into play here? I know the weights have an uncertainty component, and so does the length of the torque arm; how do I combine them? Are there more things to consider? What sort of distribution do I assign each component?

I know there are many books on uncertainties, e.g. ISA's "Measurement Uncertainty Methods and Applications", ASQ's metrology handbooks, the Fluke book (Calibration: Philosophy and Practice), and NIST publications such as Handbook 44 on weights, but it sure would be nice to have some basic uncertainty examples for different parameters distilled into one, or at most a few, volumes.
What kills me is that all the uncertainty calculators out there (you know, like those published by Quametec, Integrated Sciences Group, and the old Uncertainty Calculator 3.2 from Compaq Computer Corp.) have dozens of calculations for a single measurement, while most of the equipment being calibrated has many readings. (Do you have a Fluke 5790A? Do you accept the uncertainties Fluke lists on the certificate as gospel and put that on your A2LA scope, or do you actually take the time to do a study? What about calibrating a 5520A with that 5790A? Do you trust the uncertainty calculation routines built into MET/CAL?) Yes, I'm sure many labs out there have developed their own in-house tools, spreadsheets, etc. for managing their uncertainty budgets in "bulk", but due to the competitive nature of this business, they want to guard their knowledge, leaving the rest of us to reinvent the wheel (the subject of another rant).
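(As a rough illustration of what a Type B budget for a deadweight torque setup might look like, here is a minimal sketch assuming torque = mass x local gravity x arm length; every number in it is hypothetical and would really come from the weight and arm certificates and a local gravity determination.)

```python
import math

# Hypothetical deadweight torque setup: torque = mass * local gravity * arm length.
mass_kg = 10.0
g_local = 9.80101   # m/s^2, local gravity (hypothetical value)
arm_m = 0.500

# Convert hypothetical certificate values to standard (k=1) relative uncertainties.
u_mass_rel = (0.0005 / 2) / mass_kg          # expanded (k=2) mass uncertainty of 0.5 g
u_arm_rel = (0.0002 / 2) / arm_m             # expanded (k=2) arm-length uncertainty of 0.2 mm
u_g_rel = (0.0005 / math.sqrt(3)) / g_local  # rectangular limit on local gravity

torque = mass_kg * g_local * arm_m  # newton-metres

# For a product of independent quantities, relative uncertainties combine in RSS.
u_rel = math.sqrt(u_mass_rel**2 + u_arm_rel**2 + u_g_rel**2)
print(f"torque = {torque:.4f} N·m, relative u_c = {u_rel:.2e}, "
      f"U (k=2) = {2 * u_rel * torque:.5f} N·m")
```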
Quote from: scottbp on 01-19-2009 -- 17:46:33
I wish some of the "gurus" out there who really knew their stuff would write a "bible" on uncertainties that us non-eggheads could understand (think "uncertainties for dummies")... For example, today I grappled with uncertainty calculations for a torque tester... What kills me is that all the uncertainty calculators out there have dozens of calculations for a single measurement, while most of the equipment being calibrated has many readings.
Uncertainty calculators are a death trap!! DO NOT USE THEM!!!!
I can't help you with the Torque Tester; I don't have enough knowledge in that area of the lab to offer a valid analysis. But I will say this: I feel bad for you, Chief. It sounds like you got the cart before the horse. You sound like a nice guy, eager to learn, wanting to do the right thing. Do yourself a favor: learn all you can about the measurements that you are making, and understand the theory of what you're doing and the test methodology. Ask questions, lots of questions. Don't just read the procedure and go through the motions. There are way too many techs out there like that already, and you'll never separate yourself from the pack that way. Once you have a firm grasp of what you're doing and what can affect your measurement process (e.g. types of cables, connectors, loading effects), you'll be light years ahead of your peers who are doing uncertainty by calculation. There is no magic uncertainty calculator that will reduce your uncertainties for you; only you can do that. Knowledge and experience are the key.
Quote from: scottbp on 01-19-2009 -- 17:46:33
For example, today I grappled with uncertainty calculations for a torque tester. What all comes into play here? I know the weights have an uncertainty component, and so does the length of the torque arm; how do I combine them? Are there more things to consider? What sort of distribution do I assign each component?
Use a rectangular distribution and divide by the square root of 3.
If you have the capability to measure the weights, you can also perform some repeat measurements to help build a Type A component; the same can be said for measuring the effective length of the arm. I recommend taking enough readings that your Student's t factor gets as close to 1 as possible.
While doing this you can also have a few technicians perform the measurements so you can see if the process is solid and the techs have good technique.
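(Here is a minimal sketch of that advice, with hypothetical readings: a Type A component from repeat readings, plus a Type B component from a rectangular limit divided by the square root of 3. With more readings the degrees of freedom grow and the Student's t multiplier shrinks toward its limiting value.)

```python
import math
import statistics

# Hypothetical repeat readings from one or more technicians, in N·m.
readings = [50.02, 50.01, 50.03, 50.02, 50.02, 50.01, 50.03, 50.02]

n = len(readings)
s = statistics.stdev(readings)   # sample standard deviation
u_type_a = s / math.sqrt(n)      # Type A: standard uncertainty of the mean
dof = n - 1                      # degrees of freedom for the Student's t multiplier

# Type B: a hypothetical +/- 0.01 N·m rectangular limit, divided by sqrt(3).
u_type_b = 0.01 / math.sqrt(3)

u_c = math.sqrt(u_type_a**2 + u_type_b**2)
print(f"mean = {statistics.mean(readings):.3f} N·m, "
      f"u_A = {u_type_a:.4f} (dof = {dof}), u_B = {u_type_b:.4f}, u_c = {u_c:.4f}")
```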
Quote from: MIRCS on 01-20-2009 -- 06:27:29
While doing this you can also have a few technicians perform the measurements so you can see if the process is solid and the techs have good technique.
What I'm attempting to do here is create a spreadsheet where all the technician has to do is check off which weights were used, which arm was used, and then it'll compute the torque (corrected to local gravity) and show the type B uncertainty of the measurement based on the collected uncertainties of the weights and arms used. There will be an area to put down a number of repeat readings to compute a standard deviation for a type A uncertainty. (Of course, if all the readings are exactly the same, then it'll use the least significant digit divided by the square root of 12, which would be the same as the standard deviation if only one reading out of 12 differed by one digit at the last decimal place.)
What drives me to do this is when technicians who don't know anything about uncertainties email me (the engineer, conductor, and brakeman rolled into one guy) the results of a torque calibration performed a hundred miles away at a branch lab, saying they need the uncertainty report in a hurry because the unit is going out on the truck the next day, and I'm more of a K1/K8 type who knows very little about Phys/D.
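(For what it's worth, here is a minimal sketch of the spreadsheet logic described above: torque from the selected weights and arm corrected to local gravity, a Type B roll-up from the weight and arm uncertainties, and a Type A term that falls back to LSD divided by the square root of 12 when every reading is identical. The gravity value, certificate uncertainties, and readings are all hypothetical.)

```python
import math
import statistics

G_LOCAL = 9.80101  # m/s^2, hypothetical local gravity for the lab

def torque_uncertainty(weights_kg, u_weights_kg, arm_m, u_arm_m, readings, lsd):
    """Return (torque in N·m, combined standard uncertainty in N·m)."""
    mass = sum(weights_kg)
    torque = mass * G_LOCAL * arm_m  # corrected to local gravity

    # Type B: weight and arm standard uncertainties, propagated through
    # torque = m * g * L and combined in RSS.
    u_mass = math.sqrt(sum(u**2 for u in u_weights_kg))
    u_type_b = torque * math.sqrt((u_mass / mass)**2 + (u_arm_m / arm_m)**2)

    # Type A: standard deviation of the mean, or LSD/sqrt(12) when the
    # readings show no scatter at all.
    if len(set(readings)) > 1:
        u_type_a = statistics.stdev(readings) / math.sqrt(len(readings))
    else:
        u_type_a = lsd / math.sqrt(12)

    return torque, math.sqrt(u_type_a**2 + u_type_b**2)

t, u = torque_uncertainty(
    weights_kg=[5.0, 2.0], u_weights_kg=[0.00025, 0.00010],
    arm_m=0.500, u_arm_m=0.0001,
    readings=[34.31, 34.31, 34.31], lsd=0.01)
print(f"torque = {t:.3f} N·m, u_c = {u:.4f} N·m (U = {2 * u:.4f} N·m at k=2)")
```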
I have an "Uncertainty Analysis of Calibration System produced by CDI". Maybe this will help you with your Torque Tester Uc budget. It is too large to upload here so I could email it to you.
Mike Boothe
quality@callabco.com