
Adding measurement components back to a measurement after iteratively solving for a value #108

@Boxylmer

Description


Hey! I've been using Measurements.jl since I started using Julia, and I've finally run into a project where I need to get components back out via `uncertainty_components`. I've considered a number of ways to recover the actual partial derivatives, but none is attractive in terms of effort or how generically it would apply to any problem.

I'm wondering if, upon reaching the iterative solution, I could calculate the partial derivatives of the solution (by finite differences or some other method) and then add them back into the `der` field of the new measurement I construct. Is this doable?
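Here's a minimal sketch of what I have in mind. `solve_for_a` and `fd_partials` are hypothetical stand-ins for my actual solver, and `Measurements.result` is (if I'm reading the source right) an unexported internal that builds a `Measurement` from a value, a tuple of partials, and the tuple of input measurements, so this leans on implementation details that could change:

```julia
using Measurements

# Hypothetical stand-in for the iterative part of f: a toy fixed-point
# solve on plain floats, so no derivative information survives it.
function solve_for_a(x, y, z)
    a = 1.0
    for _ in 1:100
        a = cos(a * x) + y * z  # toy contraction; converges for these inputs
    end
    return a
end

# Central finite differences for the partials of the solution.
function fd_partials(g, args; h = 1e-6)
    map(eachindex(args)) do i
        hi = copy(args); lo = copy(args)
        hi[i] += h; lo[i] -= h
        (g(hi...) - g(lo...)) / (2h)
    end
end

x = measurement(0.5, 0.02)
y = measurement(1.3, 0.05)
z = measurement(0.7, 0.01)
vals = Measurements.value.([x, y, z])

a_val = solve_for_a(vals...)
∂a = fd_partials(solve_for_a, vals)

# Measurements.result is an *unexported internal*: it constructs the new
# measurement with its der field populated from the given partials --
# i.e. exactly the "plug the partials back into der" step.
a = Measurements.result(a_val, Tuple(∂a), (x, y, z))
```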

To reiterate my problem a bit more concretely, here's what I'm doing now, and why it isn't working:

  1. I've defined some function of measurements x, y, z where f(x, y, z) = a
  2. Later I have some other function of measurements u, v, w where g(u, v, w) = b
  3. Finally, I have some downstream processing where (among other things) h(a, b) = c, and c is the value I care about.
  4. I want to know how much x, y, z, u, v, and w contribute to the error of c. The problem is that in the function f, I solve for a iteratively, then go back and solve for the uncertainty of a analytically, by applying the variance formula to whatever direct expression was originally available. As a result I have to construct a "new" measurement for a: since we originally found it by guessing (through some optimizer), it has no partial derivatives / history of the pathway.
  5. If I can compute finite-difference partial derivatives of a by varying x, y, and z slightly, then plug those partials back into the `der` field with the appropriate (val, err, tag) keys, I can send a on its merry way and it's business as usual (see the sketch after this list)...

...Or so I think.
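Continuing the first sketch, with toy stand-ins for `g` and `h` (again, nothing here is my real code), the hope is that the patched `a` flows through the rest of the pipeline like any natively tracked `Measurement`:

```julia
# Toy stand-ins for the rest of the pipeline (steps 2-3 above).
u = measurement(2.0, 0.1)
v = measurement(0.3, 0.02)
w = measurement(1.1, 0.04)
g(u, v, w) = u * exp(-v) + w   # direct expression, so der is tracked natively
h(a, b) = a^2 + a * b          # downstream processing

b = g(u, v, w)
c = h(a, b)   # `a` is the patched measurement from the first sketch

# If a's der field was populated correctly, c now remembers x, y, z as
# well as u, v, w, and per-input sensitivities are recoverable
# (Measurements.derivative returns 0 for inputs it has no record of):
∂c_∂x = Measurements.derivative(c, x)
```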

Does this problem make sense, and is there an obvious way to handle it in the Measurements.jl library? Or am I missing a better way to find error contributions?

For clarity, I'm already able to get the overall uncertainty, and I've experimentally confirmed that these uncertainties are correct. The issue is finding out what contributes the most to the uncertainty.
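For reference, here's roughly how I'd expect to read the contributions back out of the `c` from the sketches above via `Measurements.uncertainty_components`. I'm assuming its keys are the internal `(val, err, tag)` triples of the independent inputs, so mapping them back to names is on me:

```julia
comps = Measurements.uncertainty_components(c)

for (name, m) in ("x" => x, "y" => y, "z" => z, "u" => u, "v" => v, "w" => w)
    # m.val / m.err / m.tag reach into Measurement internals (assumption).
    println(name, " => ", get(comps, (m.val, m.err, m.tag), 0.0))
end
```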
