Reopening because of accidental deletion of #743, copying the description here:
This is another take on hybrid models, where multiple models are used at once. It is more array-oriented than #739 and closer to TVB style with traits etc. It introduces the following elements to the TVB datatypes (naming suggestions welcome):
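As a rough illustration of the "multiple models at once" idea, here is a minimal sketch in which each model owns a slice of the connectome's nodes and its own state array. All names (`Subnetwork`, `n_svar`, the model labels) are hypothetical and not the PR's actual API:

```python
import numpy as np

class Subnetwork:
    """Hypothetical container pairing one model with the nodes it simulates.

    Illustrative only: the key point is that state shapes differ per model,
    so the global (n_svar, n_nodes) state array no longer exists.
    """
    def __init__(self, name, node_indices, n_svar):
        self.name = name
        self.nodes = np.asarray(list(node_indices))  # global node ids owned by this model
        self.n_svar = n_svar                         # state-variable count for this model
        # per-subnetwork state: (n_svar, n_nodes_in_subnetwork)
        self.state = np.zeros((n_svar, len(self.nodes)))

# two models with different state-variable counts sharing one connectome
excitatory = Subnetwork('generic2d', node_indices=range(0, 40), n_svar=2)
thalamic   = Subnetwork('jansen_rit', node_indices=range(40, 76), n_svar=6)

for sn in (excitatory, thalamic):
    print(sn.name, sn.state.shape)
```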
From there, it turns out that flexible state shapes create surprises in many places where it is assumed that the state variables are homogeneous across nodes. For instance, the integrator noise is set per state variable, but different models need these set with different values.
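The noise issue can be sketched as follows: the noise amplitude `nsig` becomes a per-model array whose length matches that model's state-variable count, rather than one global `(n_svar,)` array. The dictionary layout and function names here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# per-model noise amplitudes: lengths match each model's state-variable
# count, so a single global nsig array cannot cover both models
nsig = {
    'generic2d': np.array([1e-3, 1e-3]),                  # 2 state variables
    'jansen_rit': np.array([0., 0., 0., 1e-2, 0., 0.]),   # noise on one svar only
}

def noise_increment(model_name, n_nodes, dt):
    """Additive white-noise increment, shaped per model."""
    sig = nsig[model_name][:, None]  # (n_svar, 1), broadcast over nodes
    return np.sqrt(dt) * sig * rng.standard_normal((sig.shape[0], n_nodes))

print(noise_increment('generic2d', 40, 0.1).shape)
print(noise_increment('jansen_rit', 36, 0.1).shape)
```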
Other major nuances relate to monitors, but I think some straightforward compromises are available: for "debug" monitors like raw or temporal average, the monitor is applied per model to produce a per-subnetwork trace. More "neuroimaging"-like monitors choose a particular state variable and apply the forward model (gain matrix, BOLD equations) to it.
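The two monitor styles could look roughly like this: a raw monitor that simply records each subnetwork's state, and a sensor-space monitor that picks one state variable per model, concatenates over nodes in global order, and applies a gain matrix. Function names, the state-dictionary layout, and the random gain matrix are all illustrative assumptions:

```python
import numpy as np

def raw_monitor(states):
    """Per-subnetwork "debug" trace: record each model's state as-is.

    states: dict model_name -> (n_svar, n_nodes) array.
    """
    return {name: s.copy() for name, s in states.items()}

def eeg_monitor(states, svar_of_interest, gain):
    """"Neuroimaging" style: one chosen state variable per model,
    stacked over all nodes, then projected through a forward model."""
    sources = np.concatenate(
        [states[name][svar_of_interest[name]] for name in sorted(states)])
    return gain @ sources  # (n_sensors,)

rng = np.random.default_rng(0)
states = {'a': rng.standard_normal((2, 40)), 'b': rng.standard_normal((6, 36))}
gain = rng.standard_normal((64, 76))  # 64 sensors, 76 nodes total
print(eeg_monitor(states, {'a': 0, 'b': 3}, gain).shape)
```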
Stimulation is another open question, but I wonder whether, departing from the monolithic style, a stimulus couldn't just be a "subnetwork" with a projection to the nodes being stimulated? Ideas welcome.
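To make the stimulus-as-subnetwork idea concrete, here is a small sketch where the stimulus is a unit with a time-varying output and a projection vector onto its target nodes; the class name, pulse waveform, and `drive` method are invented for illustration:

```python
import numpy as np

class StimulusSubnetwork:
    """Hypothetical: a stimulus as a tiny subnetwork whose scalar output is
    projected onto target nodes, instead of a special-cased stimulus array."""
    def __init__(self, n_nodes, targets, weight=1.0):
        self.projection = np.zeros(n_nodes)
        self.projection[np.asarray(targets)] = weight

    def output(self, t):
        # illustrative waveform: square pulse on for t in [10, 20) ms
        return 1.0 if 10.0 <= t < 20.0 else 0.0

    def drive(self, t):
        # contribution added to the afferent coupling of every node
        return self.output(t) * self.projection

stim = StimulusSubnetwork(n_nodes=76, targets=[3, 5, 7], weight=0.5)
print(stim.drive(15.0)[3], stim.drive(25.0)[3])  # 0.5 0.0
```

One appeal of this framing is that the integrator only ever sees coupling terms, so stimulation needs no special-case code path.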
Performance is terrible, ~5-10k iter/s on the default connectome with two models, but this is expected with the naive implementation and will improve dramatically after massaging for JAX or a C++ impl.
recent feedback
Feedback from Frederico Tesler and Roberta Lorenzi: there are still some bugs, and the API is not intuitive.
implementation status
cvar=[0,0,0] etc
reconfigure_boundaries_and_clamping ...