These changes improve efficiency and represent a step forward for GPU
training, as well as toward the addition of force training.
- Updated reference to cite ML4Chem.
- Bumped version.
- Dscribe module is optional and not required.
- `backends.operations` renamed to `backends.oper`.
- `atomistic.models.base`: added new `feature_preparation` function that
prepares data in a way that vectorization is favored.
- `models.merger`: the merger class now uses `torch.nn.ModuleList`
  instead of a plain Python list, so its submodules are properly registered.
- `atomistic.models.neuralnetwork`:
* `forward` is more efficient.
* All data is scattered once to avoid issues with Dask corrupting
some tensors.
* Further progress toward force training.
- `data.handler`:
* New `get_largest_number_atoms` function to support the new vectorized
forward improvements without breaking compatibility.
* New `target_keys` attribute that can be used to set any name for
targets.
- `atomistic.models.base`: `feature_preparation` moved to the `DeepLearningModel` base class.
- `atomistic.potentials`: this module is not yet working with the new data changes.
- `atomistic.features.gaussian`:
* New `dimension` attribute.
- `atomistic.models.rt`: A new model that can learn to predict retention
times.
- Fixed visualization module.
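The `ModuleList` change in `models.merger` matters because PyTorch only registers submodules held in its own container types; a minimal sketch (not the actual merger implementation) shows the difference:

```python
import torch

# Sketch only: submodules stored in a plain Python list are not
# registered with the parent module, so the optimizer never sees them.
class PlainMerger(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.models = [torch.nn.Linear(4, 1), torch.nn.Linear(4, 1)]

class ListMerger(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # ModuleList registers each submodule with the parent.
        self.models = torch.nn.ModuleList(
            [torch.nn.Linear(4, 1), torch.nn.Linear(4, 1)]
        )

print(len(list(PlainMerger().parameters())))  # 0
print(len(list(ListMerger().parameters())))   # 4 (2 weights + 2 biases)
```

With the plain list, `model.parameters()` is empty and training silently does nothing for those submodules.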
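A hedged sketch of the idea behind `get_largest_number_atoms` (the shapes and padding scheme here are assumptions for illustration, not ML4Chem's actual implementation): molecules have varying atom counts, so padding each feature matrix to the largest count lets one stacked tensor feed a vectorized forward pass.

```python
import torch

# Hypothetical feature matrices for molecules with 2, 3, and 5 atoms,
# each atom described by 4 features.
features = [torch.randn(2, 4), torch.randn(3, 4), torch.randn(5, 4)]

# Analogue of get_largest_number_atoms: find the widest molecule.
largest = max(f.shape[0] for f in features)

# Zero-pad every molecule to `largest` atoms so a single stacked
# tensor can be pushed through the network in one call.
padded = torch.stack(
    [torch.nn.functional.pad(f, (0, 0, 0, largest - f.shape[0]))
     for f in features]
)
print(padded.shape)  # torch.Size([3, 5, 4])
```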
* master:
  - Modifications needed for the autoencoders module to work with the new loss structure.
  - `TopologicalLoss` is now a class and supports loss weights.
  - Use `torch.norm` instead of `np.linalg.norm`.
  - Added reconstruction to `TopologicalLoss`; training seems to work.
  - Added the topological loss function.
  - General improvements.
  - Added the `active` module for active-learning purposes.
The operations are done in a vectorized way with NumPy. This code now has to be changed to work with PyTorch.
In this commit, forces can be obtained with autograd with:
```python
forces = -torch.autograd.grad(outputs[0], example, create_graph=True, retain_graph=True)[0]
```
This is still to be tested during training.
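As a toy illustration of the pattern above (the energy function here is an arbitrary stand-in, not an ML4Chem model), the same `torch.autograd.grad` call yields forces as the negative gradient of a scalar energy with respect to positions:

```python
import torch

# Toy stand-in for a trained potential: energy is a smooth scalar
# function of atomic positions (sum of squared pairwise distances).
positions = torch.randn(5, 3, requires_grad=True)  # 5 atoms in 3D
energy = torch.cdist(positions, positions).pow(2).sum()

# Same pattern as the snippet above: forces are the negative gradient
# of the energy with respect to the positions.
forces = -torch.autograd.grad(
    energy, positions, create_graph=True, retain_graph=True
)[0]
print(forces.shape)  # torch.Size([5, 3])
```

`create_graph=True` keeps the graph differentiable, which is what makes it possible to train on forces (the loss on forces can itself be backpropagated).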
* master: Updated documentation. Updated requirements for building documentation.
* master:
  - Changes to the `Gaussian` class and handler.
  - Fix m2r2 issue; see sphinx-doc/sphinx#8395.
  - Added version requirement `msgpack>=0.6.0` in requirements.txt.
  - m2r fix.
  - `neuralnetwork` module made compatible with new PyTorch versions.
- Training is now working again.
- The `Gaussian` class was completely updated to the new computation scheme, with and without a Dask client.
- `== None` renamed to `is None`.
- Removed unused imports.
This new version lets you add plots to subplots and change the position of scores in the parity plot.
This change opens up the possibility of running models on GPUs.