AmiciObjective: check that sensitivities wrt all relevant parameters can be computed #1416

dweindl wants to merge 11 commits into ICB-DCM:develop

Conversation

Closes ICB-DCM#1414
Codecov Report: All modified and coverable lines are covered by tests ✅
@@            Coverage Diff             @@
##           develop     #1416       +/-   ##
=============================================
- Coverage    83.88%    31.05%    -52.83%
=============================================
  Files          160       160
  Lines        13096     13105        +9
=============================================
- Hits         10985      4070     -6915
- Misses        2111      9035     +6924

☔ View full report in Codecov by Sentry.
PaulJonasJost left a comment
Thanks, the check looks sound to me, but probably good if @FFroehlich has another look at it.
Conceptual question: do I get it right that if currently (and henceforth) I create the objective only with PetabImporter.create_objective(), I get back an objective that will artificially compute gradients/sensitivities for all parameters, independent of whether they are fixed or not? Artificially in the sense that for fixed parameters it returns 0? So the only difference between passing it/not passing it to .create_problem would be the dimensionality?
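For concreteness, a minimal sketch of the two construction paths being compared; the YAML file name is a placeholder for whatever PEtab setup is at hand, and whether the two resulting gradients differ only in dimensionality is exactly the question asked above:

```python
import pypesto.petab

importer = pypesto.petab.PetabImporter.from_yaml("model.yaml")

# Path 1: objective alone. The gradient covers all model parameters;
# for PEtab-fixed parameters it would contain the "artificial" zeros
# discussed above.
obj_full = importer.create_objective(max_sensi_order=1)

# Path 2: via the problem. Fixed parameters are handled here, so the
# optimizer only sees the estimated (free) parameters.
problem = importer.create_problem(
    objective=importer.create_objective(max_sensi_order=1)
)
obj_reduced = problem.objective

print(len(obj_full.x_names), len(problem.x_free_indices))
```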
    This object is expected to be passed to
    :class:`PetabImporter.create_problem` to correctly handle fixed
    parameters.
Should we have a test to check the expected behavior if one does not pass it to PetabImporter.create_problem?
The problem is that we don't know the expected behaviour if one does not pass it to PetabImporter.create_problem.
objective = importer.create_problem(
    objective=importer.create_objective(max_sensi_order=1)
).objective
This might be a minor thing, but I find this a really unintuitive way of creating the objective function correctly.
The problem is that PEtab fixed parameters are only handled in create_problem, but not in create_objective. Maybe this could/should be changed. Not completely sure what might rely on the current behaviour.
Well, both the importer and the problem should be aware of fixed parameters, so I agree that one should naturally expect problem.create_objective, importer.create_objective, and importer.create_problem().objective to be equivalent, and that passing importer.create_objective(max_sensi_order=1) to importer.create_problem shouldn't be necessary.
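Spelled out in code, these are the routes one would expect to be equivalent (importer as in the snippet above; the equivalence is the expectation voiced here, not a documented guarantee):

```python
# Three ways one would naturally expect to yield equivalent objectives:
obj_a = importer.create_objective(max_sensi_order=1)
obj_b = importer.create_problem().objective
obj_c = importer.create_problem(
    objective=importer.create_objective(max_sensi_order=1)
).objective
```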
| """Get the model parameters for which sensitivities are missing. | ||
|
|
||
| Get the model parameters w.r.t. which sensitivities are required, | ||
| but aren't available missing. |
Suggested change:
-    but aren't available missing.
+    but aren't available.
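For illustration, a rough sketch of what such a check could look like; the helper name and logic are hypothetical rather than this PR's implementation, though amici.Model.getParameterIds() is the real AMICI accessor for the parameters that sensitivities are computed for:

```python
import amici


def parameters_with_missing_sensitivities(
    amici_model: amici.Model, required_ids: list[str]
) -> list[str]:
    """Hypothetical helper: required parameter IDs without sensitivities.

    AMICI computes sensitivities only w.r.t. the model's (dynamic)
    parameters; anything compiled as a fixed parameter yields none.
    """
    sensi_ids = set(amici_model.getParameterIds())
    return [pid for pid in required_ids if pid not in sensi_ids]
```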
…can be computed

Adds a check whether the AMICI model in an AmiciObjective is able to compute sensitivities w.r.t. the objective parameters. Closes #1414

This raised the question of whether an objective can have implicitly fixed parameters, i.e. whether it should be considered a use case that some objective parameters that may change the objective value have zero entries in the gradient. This is what happens if one uses PetabImporter.create_objective() without passing it through PetabImporter.create_problem(). So far, pypesto is rather unclear on that. Whether this affects optimization will depend on into which Problem this objective is put. My assumption here is that this is undesirable and that changes in the objective value should be reflected in the gradient.
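To make the described pitfall concrete: a finite-difference probe along the following lines would flag parameters whose gradient entry is zero although the objective value changes (a sketch; importer as above, and the parameter vector and tolerances are placeholders):

```python
import numpy as np

obj = importer.create_objective(max_sensi_order=1)
x = np.ones(len(obj.x_names))  # placeholder parameter vector

fval, grad = obj(x, sensi_orders=(0, 1))
eps = 1e-6
for i in np.where(grad == 0.0)[0]:
    x_eps = x.copy()
    x_eps[i] += eps
    fd = (obj(x_eps) - fval) / eps
    if abs(fd) > 1e-8:
        print(f"x[{i}]: zero gradient entry, but FD slope {fd:.2e}")
```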