Releases: rejuvyesh/PyCallChainRules.jl
v0.4.0
PyCallChainRules v0.4.0
- Add integration and example usage with Lux.jl and Jax.
Merged pull requests:
- Add training examples for mixing Flux and Torch layers (#22) (@rejuvyesh)
- Use BatchNorm removal function from `functorch` (#25) (@rejuvyesh)
- CompatHelper: bump compat for Functors to 0.3, (keep existing compat) (#26) (@github-actions[bot])
- Add Lux.jl integration with Jax (#27) (@rejuvyesh)
v0.3.2
PyCallChainRules v0.3.2
- Fix `Adapt` usage for CUDA array views.
Merged pull requests:
- Add example scripts for training with flux (#20) (@rejuvyesh)
- Add DiffEqFlux training on GPU with pytorch (#23) (@rejuvyesh)
v0.3.1
PyCallChainRules v0.3.1
- Fix `Adapt` usage for array views.
Merged pull requests:
- Do type piracy in a single place (#17) (@rejuvyesh)
- Add an example for DiffEqFlux training with pytorch (#21) (@rejuvyesh)
v0.3.0
PyCallChainRules v0.3.0
- Support multiple output python functions
Merged pull requests:
- Support multi-output PyTorch/Jax functions (#16) (@rejuvyesh)
v0.2.1
PyCallChainRules v0.2.1
- Fix `Adapt` usage.
Merged pull requests:
- fix typo in readme (#12) (@terasakisatoshi)
- Fix Adapt usage (#15) (@rejuvyesh)
v0.2.0
PyCallChainRules v0.2.0
- Use `dlpack` (via DLPack.jl) for zero-copy tensor sharing with Python
- Enable optional CUDA support
Merged pull requests:
- Use `dlpack` for array interop (#10) (@rejuvyesh)
v0.1.1
PyCallChainRules v0.1.1
- Fix support for pytorch modules with `buffers`.
- Add benchmark suite