One issue I have with Fluke's "uncertainty" specs is that they aren't consistent between models, so I have to be careful. Some are stated at 99% confidence, some at 95%, some have both "absolute" and "relative" uncertainty, etc. I remember a number of years ago getting gigged by an eagle-eyed auditor because (I think this was on the 5520A?) the specs were at 99% and I treated them as k=2 (oops).
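For anyone who wants the arithmetic behind that gig, here's a minimal sketch (assuming the spec is normally distributed; the spec value is made up, not a real 5520A number):

    from statistics import NormalDist

    def coverage_factor(confidence):
        """Two-sided coverage factor k, assuming a normal distribution."""
        return NormalDist().inv_cdf(0.5 + confidence / 2.0)

    spec = 10.0                  # hypothetical +/-10 ppm spec stated at 99% confidence
    k99 = coverage_factor(0.99)  # ~2.576 -- not 2!
    u = spec / k99               # ~3.88 ppm standard uncertainty
    u_oops = spec / 2.0          # 5.00 ppm -- what you get treating 99% as k=2

    print(f"k(99%) = {k99:.3f}, u = {u:.2f} ppm (not {u_oops:.2f} ppm)")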
The other side of this is that since we have to calculate "Measurement Uncertainty" (which is more than just the "uncertainty" of the standard), there are other contributors that need to be RSSed together anyway.
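(For anyone newer to budgets: the RSS is just the GUM-style root-sum-square of the standard uncertainties of the individual contributors,

    u_c = sqrt(u_1^2 + u_2^2 + ... + u_n^2)

and the expanded uncertainty you report is U = k * u_c.)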
So when dealing with a Fluke spec in my uncertainty budget, I now pay close attention: What % confidence is their spec stated at? Is it absolute or relative uncertainty? Their relative spec excludes the uncertainty of the standard's own calibration, so if I use the Relative Uncertainty spec, I need to RSS in the cal uncertainty from the cal cert for that standard. And if they don't state whether it is relative or absolute uncertainty, I include MU from the cal cert to be safe. What is the applicable temperature range of their spec? What is the cal interval of their spec? I nitpick my way through a lot of their ever-changing details when building a budget.
And I make sure my budget has the big six (as applicable): Spec of the Standard, Resolution of the Standard, Resolution of the T.I., MU from Standard Cal Cert, Repeatability, and Temp Coefficient.
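To make that concrete, here's a minimal sketch of how I'd RSS the big six, with made-up numbers (the divisor depends on the distribution you assign each contributor: the 99% spec gets divided by 2.576 per its stated confidence, resolution and tempco get treated as rectangular, the cert MU comes in at its stated k; a strict GUM treatment would derive k from effective degrees of freedom, but k=2 is the usual shortcut):

    import math

    # Hypothetical values, all in ppm of reading -- not real specs.
    # Each entry: (name, half-width or expanded value, divisor to standard uncertainty)
    contributors = [
        ("Spec of the Standard (99%)",  10.0, 2.576),         # normal, 99% confidence
        ("Resolution of the Standard",   0.5, math.sqrt(3)),  # rectangular: half-LSD / sqrt(3)
        ("Resolution of the T.I.",       1.0, math.sqrt(3)),  # rectangular
        ("MU from Standard Cal Cert",    4.0, 2.0),           # expanded at k=2 on the cert
        ("Repeatability (std dev)",      1.2, 1.0),           # Type A, already a standard uncertainty
        ("Temp Coefficient",             0.8, math.sqrt(3)),  # rectangular, only if applicable
    ]

    u_c = math.sqrt(sum((value / divisor) ** 2 for _, value, divisor in contributors))
    U = 2.0 * u_c  # expanded uncertainty at k=2 (~95%)

    print(f"u_c = {u_c:.2f} ppm, U (k=2) = {U:.2f} ppm")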