feat: Add Hybrid Orchestrator and Cortex Model #739
ReyJ94 wants to merge 1 commit into the-virtual-brain:master
Conversation
- Introduce a new HybridOrchestrator module in tvb/simulator/hybrid/ for coordinating multi-model simulations (introducing heterogeneous simulations in TVB).
- Add cortex_model (a zerlaut.py wrapper) to demonstrate how adapters can be used to make any model compatible with the new framework.

PS: Over the weekend I started adding functionality to load connectivity from a TVB file, but I have not tested it and it might have bugs. The old method of specifying connectivity manually remains backwards compatible; I ran simulation tests for it and it works.
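The adapter idea described above can be sketched roughly as follows. Note this is a hypothetical illustration of the pattern, not the actual API in this PR: `ModelAdapter`, `LinearModel`, and the shapes are assumptions; the real cortex_model wraps zerlaut.py.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ModelAdapter:
    """Hypothetical adapter exposing a uniform dfun interface,
    so heterogeneous models can be driven by one orchestrator."""
    model: object  # any object with a dfun(state, coupling) method
    n_svar: int    # number of state variables the model exposes

    def dfun(self, state: np.ndarray, coupling: np.ndarray) -> np.ndarray:
        # Delegate to the wrapped model; shapes are (n_svar, n_nodes).
        assert state.shape[0] == self.n_svar
        return self.model.dfun(state, coupling)


class LinearModel:
    """Toy wrapped model (linear decay), standing in for e.g. zerlaut.py."""
    def dfun(self, state, coupling):
        return -state + coupling


adapter = ModelAdapter(model=LinearModel(), n_svar=2)
deriv = adapter.dfun(np.zeros((2, 4)), np.ones((2, 4)))
print(deriv.shape)  # (2, 4)
```

Any model can then be plugged into the orchestrator as long as its adapter respects this one calling convention.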
liadomide left a comment
Thanks @ReyJ94 for this PR.
A few cosmetics:
- as a TVB contribution, we would appreciate if you could keep the header from other tvb_library .py modules related to license and citation
- try to replace the print calls with the usage of logger (similar to other modules under tvb_library)
- HybridOrchestrator.py seems huge to me. It is a developer choice what is a good and reasonable size, so I am only asking: is the current size appropriate, or would it be possible to split?
- also maybe rename the module orchestrator.py?
- could you provide some unit tests for this new functionality, as separate pytest units?
- would it be appropriate to have a notebook for documenting as well as testing this new functionality?
I developed a framework for TVB based on co-simulation two years ago. I think co-simulation can be used for similar functionality as your proposal. Can you explain to me the difference between your proposal and the usage of co-simulator monitors?
Dear @liadomide, thank you for your questions/suggestions. I apologise, I should have given more context for this. Regarding this addition to TVB: I was asked to come up with a way to have heterogeneous nodes in TVB, i.e. the possibility to have different TVB models in different brain regions and run simulations. This was the first solution I wanted to try. This pull request was intended as a proof of concept, mostly for Marmaduke. He told me that I did not have to make it 100% by myself, just to upload the version I had, and that he would make other improvements. So any further improvements (at least in the immediate future) could be coming soon from Marmaduke. For now I will be focusing on other projects, and will patiently wait to see Marmaduke's version as well.

PS: Dear @lionelkusch, I'm afraid I cannot answer your question on the differences, since I was not aware of your co-simulation framework before this. I will take a look into it.
hi @ReyJ94, thanks for the PR. As mentioned, the context for this work is in part the Ebrains2 efforts on standard models, where multiple neural mass models should be usable within the same simulation. This is what makes it different from @lionelkusch's work on using multiple simulators within the same simulation. I have gone through the code, and, addressing @liadomide's comments, some parts could be moved to a notebook as a demo and reproduced as a unit test. The core piece that should live in this module (or similar) is the
@ReyJ94 could you please consider:
Dear all, I will provide unit tests and example notebooks, and also split the main code, once I have added further support for loading the connectivity from the zip file. Right now that does not work (the logic is not complete enough): the state-shaping mechanism is not smart enough to allow it. The code currently works as intended with manual connections and a fake aggregated node that tricks TVB into thinking "everything is ok". However, we want real nodes in TVB, created automatically from the connectivity file. This is step 2 of the work.
I am reading the code here more carefully now, and while I could go line by line, I think it is more productive to provide high-level comments.

My most important comment is that we cannot loop through nodes explicitly at the Python level; the resulting execution time will make this unusable in practice. If we were using C++ this would be an OK prototype approach, but not in Python. Instead, we have to think in terms of vectors of length equal to the number of nodes used by a given model, in order to amortize the cost of interpretation and control flow. In other words, I think the structure should be built around multiple "subatlases" that comprise sets of regions with the same model (e.g. cortex, thalamus, cerebellum, etc.). From there, we will need a pair-wise loop over subatlases to compute the sum of inputs to each subatlas. Then we compute derivatives per subatlas and return them as a flat array.

Admittedly I am not familiar with all the features from recent Python versions used here, but the approach using dataclasses is quite welcome, in line with the declarative configuration style used in TVB. I think down the line this will be compatible with JAX's support for JIT compilation (using various packages that support JAX compilation of dataclasses), which will make much of the overhead related to coordinating between models go away.

So, concretely, I would like to see us change the approach from node-wise to more array-oriented, expressing the subatlases in terms of dataclasses which are then interpreted during the dfun in the way I outlined in the previous paragraph.

Less important, but there are more lines of code doing validation than I would include; the important validation will come from scientific demos which are consistent with empirical data. I'm not saying to remove anything, but we can relax a bit about, e.g., shape errors, since these are easily debugged.
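The array-oriented subatlas approach described above could be sketched like this. This is a minimal illustration under assumed names and shapes (`Subatlas`, a scalar state per node, dense weights), not the PR's code: each subatlas holds the node indices sharing one model, coupling is computed with vectorised blocks of the weight matrix, and per-subatlas derivatives are written back into one flat array.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Subatlas:
    """Set of regions sharing one model (e.g. cortex, thalamus)."""
    name: str
    nodes: np.ndarray  # indices into the global atlas
    dfun: callable     # dfun(state, coupling) -> derivatives


def hybrid_dfun(subatlases, weights, state):
    """state: (n_nodes,) activity; weights: (n_nodes, n_nodes)."""
    derivs = np.empty_like(state)
    for sa in subatlases:
        # Pair-wise loop over subatlases: sum inputs from every block.
        coupling = np.zeros(len(sa.nodes))
        for src in subatlases:
            block = weights[np.ix_(sa.nodes, src.nodes)]  # (n_tgt, n_src)
            coupling += block @ state[src.nodes]          # vectorised sum
        # Derivatives computed per subatlas, over whole vectors at once.
        derivs[sa.nodes] = sa.dfun(state[sa.nodes], coupling)
    return derivs  # flat array over all nodes


# Toy usage: two subatlases with different dynamics on 6 nodes.
w = np.eye(6)
cortex = Subatlas("cortex", np.arange(0, 4), lambda x, c: -x + c)
thalamus = Subatlas("thalamus", np.arange(4, 6), lambda x, c: -2 * x + c)
out = hybrid_dfun([cortex, thalamus], w, np.ones(6))
print(out)  # [0. 0. 0. 0. -1. -1.]
```

The Python-level loops here run only over the handful of subatlases, not over nodes, so the interpretation overhead is amortized over whole-vector NumPy operations.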