I'm getting uncertainties for my accelerometer calibrations that are higher than anyone else I am seeing published.
My numbers are based on a Dytran 3120B (reference accelerometer) and a VR 9500 controller.
For simplicity's sake, the reference accelerometer is +/- 2% rdg and the controller is +/- 2% rdg (approximate numbers).
I'm using a uniform (rectangular) distribution with a coverage factor of k = 2.
With repeatability and temperature coefficient factored in, I am getting the following RSS uncertainties (quick sketch of the combination below the numbers):
5 Hz - 10 Hz = 8.48% rdg
10 Hz - 2 kHz = 6.10% rdg
2 kHz - 8 kHz = 7.41% rdg
8 kHz - 10 kHz = 8.02% rdg
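For what it's worth, here's the bare-bones version of how I'm combining the contributors: divide each +/- limit by sqrt(3) for the rectangular distribution, RSS the standard uncertainties, then expand by k = 2. The repeatability and temperature-coefficient numbers in this snippet are placeholders for illustration, not my actual budget values:

```python
from math import sqrt

def expanded_uncertainty(contributors_pct, k=2.0):
    """Combine rectangular (uniform) contributors by RSS.

    Each entry is a +/- limit in % of reading; dividing by sqrt(3)
    converts it to a standard uncertainty before summing in quadrature.
    """
    combined = sqrt(sum((a / sqrt(3)) ** 2 for a in contributors_pct))
    return k * combined

# Placeholder budget: reference accel and controller at +/- 2% rdg each;
# the repeatability and temperature-coefficient entries are made-up
# values for illustration only.
budget = [2.0, 2.0, 1.5, 1.0]
print(f"U (k=2): {expanded_uncertainty(budget):.2f}% rdg")  # 3.87% rdg
```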
My system compares the accel under test against my reference accel to establish its sensitivity, and then compares the frequency response of the accel under test against the reference.
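In other words, a standard back-to-back comparison: both accels see the same motion, so the DUT sensitivity scales with the measured voltage ratio. Roughly this, with the function and variable names being mine rather than anything out of the VR software:

```python
def dut_sensitivity(s_ref_mV_per_g, v_dut_mV, v_ref_mV):
    """Back-to-back comparison: the DUT sees the same motion as the
    reference, so sensitivities scale with the voltage ratio."""
    return s_ref_mV_per_g * (v_dut_mV / v_ref_mV)

# e.g. 10 mV/g reference, 98 mV on the DUT vs 100 mV on the reference
print(dut_sensitivity(10.0, 98.0, 100.0))  # 9.8 mV/g
```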
I'm wondering if there is something obvious I'm missing that explains why my numbers are so much higher than everyone else's...
They do seem unusually high. I haven't used a vibration standard since the military. They do blow up C-4 in one of our manufacturing plants here, testing satellite payloads separating from rockets. The most hardcore accelerometer and analysis system I've ever seen!
These are the average uncertainties I found for most companies. Most don't list the actual accelerometer standard used...
Acceleration - (0.01 to 10 g)
(7 to <10 Hz) - 4% of reading
(10 to <30 Hz) - 3% of reading
(30 to <2000 Hz) - 1.5% of reading
(2 to 10 kHz) - 4% of reading
That's what I'm seeing for the average provider.
I think what is happening is that the controller error is being eliminated as a contributor and left out of the calculation (if you had all the data, I suppose a rationale could be made for that).
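Quick sanity check on that idea, back-calculating from your 6.10% mid-band number (k = 2, controller at +/- 2% rdg, rectangular):

```python
from math import sqrt

K = 2.0
U_midband = 6.10                # your mid-band expanded uncertainty, % rdg
u_combined = U_midband / K      # back out the combined standard uncertainty
u_controller = 2.0 / sqrt(3)    # +/- 2% rdg, rectangular

# Remove the controller term in quadrature and re-expand.
u_without = sqrt(u_combined**2 - u_controller**2)
print(f"Without controller: {K * u_without:.2f}% rdg")  # ~5.65% rdg
```

So under those assumptions, pulling the controller out only takes you from 6.10% down to about 5.65% rdg; on its own it doesn't close the whole gap to the 1.5% figures above.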