
Version 0.1.0 #21

Open

muammar wants to merge 24 commits into master from 0.1.0

Conversation

Owner

@muammar muammar commented Aug 3, 2020

No description provided.

muammar and others added 24 commits April 12, 2020 12:55
These changes improve efficiency and are a step forward both for GPU
training and for the addition of force training.

- Updated reference to cite ML4Chem.
- Bumped version.
- The Dscribe module is now optional rather than required.
- `backends.operations` renamed to `backends.oper`.  
- `atomistic.models.base`: added a new `feature_preparation` function that
  prepares data in a way that favors vectorization.
- `models.merger`: the merger class now uses `torch.nn.ModuleList`
  instead of a plain list (see the sketch after this list).
- `atomistic.models.neuralnetwork`:
    * `forward` is more efficient.
    * All data is scattered once to avoid problems with Dask corrupting
      some tensors.
    * More steps toward force training.
- `data.handler`:
    * New `get_largest_number_atoms` function to support the new vectorized
      `forward` improvements without breaking compatibility.
    * New `target_keys` attribute that can be used to set any name for
      targets.
- `atomistic.models.base`: `feature_preparation` moved to
  `DeepLearningModel` base class.
- `atomistic.potentials` module is not yet working with the new data changes.
- `atomistic.features.gaussian`:
    * New `dimension` attribute.
- `atomistic.models.rt`: A new model that can learn to predict retention
  times.
- Fixed visualization module.
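
As an aside on the `torch.nn.ModuleList` change above: unlike a plain Python list, `ModuleList` registers its submodules, so their parameters appear in `model.parameters()` and move with `.to(device)`. A minimal sketch of the pattern (the `MergedModel` name and layer sizes are hypothetical, not ML4Chem code):

```
import torch

class MergedModel(torch.nn.Module):
    """Sketch: holds several submodels in a ModuleList."""

    def __init__(self, submodels):
        super().__init__()
        # A plain list would hide these parameters from optimizers;
        # ModuleList registers each submodel properly.
        self.models = torch.nn.ModuleList(submodels)

    def forward(self, inputs):
        # Run each submodel on its own input and collect the outputs.
        return [model(x) for model, x in zip(self.models, inputs)]

merged = MergedModel([torch.nn.Linear(8, 1), torch.nn.Linear(8, 1)])
print(sum(p.numel() for p in merged.parameters()))  # parameters are registered
```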
* master:
  Modifications needed for the autoencoders module to work with the new loss structure.
  TopologicalLoss is now a class and supports loss weights.
  Use torch.norm instead of np.linalg.norm (see the sketch after this list).
  Addition of reconstruction to TopologicalLoss; training seems to work.
  Addition of Topological loss function.
  General improvements.
  Addition of active module for active learning purposes.
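
The items above make `TopologicalLoss` a class with loss weights and replace `np.linalg.norm` with `torch.norm`. A minimal sketch of that pattern, not ML4Chem's actual implementation (the class and attribute names here are assumptions):

```
import torch

class WeightedLoss(torch.nn.Module):
    """Sketch of a class-based loss with weights. torch.norm keeps the
    computation differentiable and on the tensors' device, which
    np.linalg.norm would break."""

    def __init__(self, reconstruction_weight=1.0, penalty_weight=0.1):
        super().__init__()
        self.reconstruction_weight = reconstruction_weight
        self.penalty_weight = penalty_weight

    def forward(self, outputs, targets, latent):
        reconstruction = torch.norm(outputs - targets)
        penalty = torch.norm(latent)
        return (self.reconstruction_weight * reconstruction
                + self.penalty_weight * penalty)
```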
The operations are done in a vectorized way with NumPy. This code now
has to be changed to work with PyTorch.
In this commit, forces can be obtained with autograd with:

```
forces = -torch.autograd.grad(outputs[0], example, create_graph=True, retain_graph=True)[0]
```

This is still to be tested during training.
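
For context, a self-contained sketch of obtaining forces as the negative gradient of a predicted energy (the toy `energy_model` and tensor shapes are assumptions, not ML4Chem code):

```
import torch

# Toy energy model: maps per-atom positions to a scalar energy.
energy_model = torch.nn.Sequential(
    torch.nn.Linear(3, 16),
    torch.nn.Tanh(),
    torch.nn.Linear(16, 1),
)

# Positions must require gradients so autograd can differentiate
# the energy with respect to them.
positions = torch.randn(5, 3, requires_grad=True)
energy = energy_model(positions).sum()

# Forces are the negative gradient of the energy w.r.t. positions.
# create_graph=True keeps the graph so a force loss can itself be
# backpropagated during training.
forces = -torch.autograd.grad(energy, positions,
                              create_graph=True, retain_graph=True)[0]
print(forces.shape)  # torch.Size([5, 3])
```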
* master:
  Updated documentation.
  Updated requirements for building documentation.
* master:
  Changes to Gaussian class and handler.
  Fix m2r2 issue. See sphinx-doc/sphinx#8395
  Added version requirement of msgpack>=0.6.0 in requirements.txt
  m2r fix.
  neuralnetwork module compatible with new pytorch versions.
- Training is now working again.
- The Gaussian class was completely updated to the new computation scheme
  with and without a Dask client (see the sketch below).
- "== None" comparisons changed to "is None".
- Removed unused imports.
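
A hedged sketch of the "with and without Dask client" computation scheme mentioned above (the `compute_features` helper is hypothetical; only the generic `client.submit`/`client.gather` API from dask.distributed is assumed):

```
def compute_features(chunks, function, client=None):
    """Compute features per chunk, using a Dask client when available."""
    if client is None:
        # Serial fallback: no Dask client was passed.
        return [function(chunk) for chunk in chunks]
    # Distributed path: submit one task per chunk and gather the results.
    futures = [client.submit(function, chunk) for chunk in chunks]
    return client.gather(futures)
```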
This new version lets you add plots to subplots and change the position of scores in the parity plot.
This change opens the possibility of running models on GPUs.
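
As a generic illustration of what running models on GPUs entails in PyTorch (plain PyTorch usage, nothing ML4Chem-specific):

```
import torch

# Pick the GPU when one is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(8, 1).to(device)  # move parameters to the device
inputs = torch.randn(4, 8).to(device)     # inputs must live on the same device
outputs = model(inputs)
```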
