Releases: LuxDL/Lux.jl

LuxCore-v1.5.2 (16 Jan 01:46, commit 3a63012)

Diff since LuxCore-v1.5.1

Closed issues:

  • Layer summary page of docs i weirdly formatted (#1627)
  • Precompilation fails when Flux is in the project (#1629)

Lux v1.29.2 (14 Jan 21:30, commit c1a72f6)

Diff since v1.29.1

Closed issues:

  • Layer summary page of docs i weirdly formatted (#1627)

MLDataDevices-v1.17.1 (10 Jan 23:46, commit 4e4ea5c)

Diff since MLDataDevices-v1.17.0

LuxCore-v1.5.1 (08 Jan 19:19)

Diff since LuxCore-v1.5.0

Lux v1.29.1 (07 Jan 20:11, commit 5073e55)

Diff since v1.29.0

Closed issues:

  • Proper Sparse Interfaces (#1014)
  • Update exporting to jax example to directly use new export functionality (#1588)
  • What is wrong with my custom layer? (#1610)
  • [MLDataDevices] AbstractCuSparseArray not defined in CUDA.jl v5.9.6 (#1615)

WeightInitializers-v1.3.0 (06 Jan 17:10, commit 272d5bb)

Diff since WeightInitializers-v1.2.2

Closed issues:

  • Proper Sparse Interfaces (#1014)
  • Add simple tests for other accelerators (#686)
  • DifferentiationInterface testing (#769)
  • Automatically cache allocations for JuliaGPU workloads (#1527)
  • Relax ForwardDiff version bound? (#1530)
  • Embedding Layer results in scalar indexing with Reactant? (#1546)
  • Enzyme Cache Invalidation Failure with v1.10 (#1551)
  • OneHotArrays + Reactant with cross entropy loss (#1556)
  • Overhead of convolution on AMD GPU (#1557)
  • LuxTestUtils not re-exported (#1579)
  • [MLDataDevices] failure at transfering non numerical array (#1580)
  • Mark LuxCore imports in Lux as public (#1584)
  • [MLDataDevices] broken support for isbits types movement (#1586)
  • Update exporting to jax example to directly use new export functionality (#1588)
  • Failed to precompile LuxLossFunctionsExt (#1591)
  • Update AMD support documentation to recommend using Reactant (#1592)
  • ERROR: KeyError: key "ReactantCore" not found (#1596)
  • TagBot: Manual intervention needed for releases (#1604)
  • What is wrong with my custom layer? (#1610)
  • [MLDataDevices] AbstractCuSparseArray not defined in CUDA.jl v5.9.6 (#1615)

MLDataDevices-v1.17.0 (06 Jan 17:14, commit 272d5bb)

Diff since MLDataDevices-v1.16.1

LuxLib-v1.15.0 (06 Jan 15:01, commit 5c39d59)

Diff since LuxLib-v1.14.0

Closed issues:

  • Proper Sparse Interfaces (#1014)
  • [MLDataDevices] AbstractCuSparseArray not defined in CUDA.jl v5.9.6 (#1615)

LuxCore-v1.5.0 (06 Jan 05:19)

Diff since LuxCore-v1.4.2

Merged pull requests:

  • feat: migrate DDIM to Reactant (#1158) (@avik-pal)
  • feat: precompile common workloads (#1485) (@avik-pal)
  • Add type-stable eltype control to device adaptors with comprehensive testing (#1498) (@Copilot)
  • chore: bump crate-ci/typos from 1.36.2 to 1.36.3 (#1499) (@dependabot[bot])
  • chore: bump crate-ci/typos from 1.36.3 to 1.37.2 (#1500) (@dependabot[bot])
  • CompatHelper: bump compat for BFloat16s to 0.6 for package CIFAR10, (keep existing compat) (#1501) (@github-actions[bot])
  • CompatHelper: bump compat for BFloat16s to 0.6 for package Qwen3, (keep existing compat) (#1502) (@github-actions[bot])
  • CompatHelper: bump compat for JLArrays to 0.3 for package test, (keep existing compat) (#1503) (@github-actions[bot])
  • ci: use 1.11 (#1504) (@avik-pal)
  • feat: JVP and VJP APIs for Reactant (#1506) (@avik-pal)
  • feat: batched_jacobian for Reactant (#1507) (@avik-pal)
  • chore: bump crate-ci/typos from 1.37.2 to 1.38.1 (#1508) (@dependabot[bot])
  • CompatHelper: bump compat for Optimization to 5 for package GravitationalWaveForm, (keep existing compat) (#1510) (@github-actions[bot])
  • CompatHelper: bump compat for Optimization to 5 for package OptimizationIntegration, (keep existing compat) (#1511) (@github-actions[bot])
  • CompatHelper: bump compat for BLISBLAS in [weakdeps] to 0.2 for package LuxLib, (keep existing compat) (#1512) (@github-actions[bot])
  • CompatHelper: bump compat for BLISBLAS to 0.2 for package test, (keep existing compat) (#1513) (@github-actions[bot])
  • feat: move rng to reactant device (#1517) (@avik-pal)
  • fix: donation errors for reactant (#1518) (@avik-pal)
  • feat: allow passing a sync option (#1519) (@avik-pal)
  • chore: bump actions/upload-artifact from 4 to 5 (#1526) (@dependabot[bot])
  • feat: support distributed training via TrainState API (#1529) (@avik-pal)
  • feat: support track numbers via reactant device API (#1533) (@avik-pal)
  • ci: run LuxCore + MLDataDevices testing on 1.12 (#1534) (@avik-pal)
  • docs: stop manual specification of precision config (#1536) (@avik-pal)
  • CompatHelper: add new compat entry for TensorBoardLogger at version 0.1 for package DDIM, (keep existing compat) (#1537) (@github-actions[bot])
  • CompatHelper: add new compat entry for ImageShow at version 0.3 for package DDIM, (keep existing compat) (#1538) (@github-actions[bot])
  • CompatHelper: add new compat entry for OhMyThreads at version 0.8 for package DDIM, (keep existing compat) (#1539) (@github-actions[bot])
  • chore: bump crate-ci/typos from 1.38.1 to 1.39.0 (#1541) (@dependabot[bot])
  • refactor: use EnzymeRules.@easy_rule in Lux.jl (#1542) (@avik-pal)
  • Fix identity_init filling entire submatrix instead of diagonal (#1544) (@Copilot)
  • test: use finite differencing for gradient testing for Reactant (#1545) (@avik-pal)
  • feat: more informative error on constructing trainstate with compiled function (#1547) (@avik-pal)
  • fix: minor reactant stuff + docs build (#1548) (@avik-pal)
  • feat: use a caching allocator for GPUArrays workflows (#1549) (@avik-pal)
  • Avoid reconstruction in Internal.unsafe_free! (#1550) (@AntonOresten)
  • test: update tests for enzyme (#1552) (@avik-pal)
  • fix: update how the error message looks (#1553) (@avik-pal)
  • Use |> for moving data to devices (#1559) (@abhro)
  • feat: return sequence properly + checkpointing + mincut (#1561) (@avik-pal)
  • fix(LuxLib): avoid extra copy if input and output are aliased (#1562) (@avik-pal)
  • ci: use dependabot for updating compat entries (#1563) (@avik-pal)
  • chore: bump actions/checkout from 5 to 6 (#1564) (@dependabot[bot])
  • chore: bump crate-ci/typos from 1.39.0 to 1.39.2 (#1565) (@dependabot[bot])
  • Put plot labels within plotting directives (#1566) (@abhro)
  • Remove unnecessary begin...end markers (#1567) (@abhro)
  • ci(docs): update cpu builds to use default gh actions (#1569) (@avik-pal)
  • Indent code blocks to make it part of ordered list (#1570) (@abhro)
  • Fix @example blocks in gpu_management.md (#1571) (@abhro)
  • chore: bump actions/download-artifact from 5 to 6 (#1575) (@dependabot[bot])
  • ci: fix download path for cuda ci (#1576) (@avik-pal)
  • chore: bump crate-ci/typos from 1.39.2 to 1.40.0 (#1578) (@dependabot[bot])
  • test: Metal test now works (#1581) (@avik-pal)
  • Add AbstractChar array support to MLDataDevices (#1582) (@Copilot)
  • test: streamline installing packages in tests (#1583) (@avik-pal)
  • Mark LuxCore imports as public using @public (#1585) (@Copilot)
  • Fix isbits type support for GPU device transfer (#1587) (@Copilot)
  • Add OpenCL support to MLDataDevices (#1590) (@VarLad)
  • docs: Recommend Reactant for AMD GPU support and add Tenstorrent backend (#1593) (@Copilot)
  • docs: fix docs build (#1594) (@avik-pal)
  • ci(docs): run all docs on cpu runners (#1595) (@avik-pal)
  • chore: bump actions/upload-artifact from 5 to 6 (#1598) (@dependabot[bot])
  • chore: bump actions/download-artifact from 6 to 7 (#1599) (@dependabot[bot])
  • chore: bump peter-evans/create-pull-request from 7 to 8 (#1600) (@dependabot[bot])
  • feat: proper annotations for xprof (#1603) (@avik-pal)
  • fix: forwarddiff support for gpu arrays (#1605) (@avik-pal)
  • docs: use new API to export jax models (#1606) (@avik-pal)
  • chore: update NPZ requirement to 0.4.3 in /docs (#1607) (@dependabot[bot])
  • chore: update MLUtils requirement to 0.4.8 in /docs (#1608) (@dependabot[bot])
  • ci: reactant don't preallocate on CI (#1611) (@avik-pal)
  • ci: fix how downgrade ci works (#1612) (@avik-pal)
  • feat: make forwarddiff a weak dependency for luxlib (#1613) (@avik-pal)
  • perf: more benchmarking results (#1614) (@avik-pal)
  • chore: bump crate-ci/typos from 1.40.0 to 1.41.0 (#1616) (@dependabot[bot])
  • fix: sparse arrays support (#1617) (@avik-pal)
  • chore: rename extensions in LuxCore (#1618) (@avik-pal)

Closed issues:

  • Rethinking eltype conversions in Adaptors (#1015)
  • Proper Sparse Interfaces (#1014)
  • Add simple tests for other accelerators (#686)
  • DifferentiationInterface testing (#769)
  • CUDA.jl along cannot trigger automatic GPU backend selection (#1245)
  • Fix remaining CUDA testing (#1457)
  • Global configuration for setting sync=true in training API (#1509)
  • Invalid buffer donation in new Reactant versions (#1514)
  • Lux.jl and Reactant and StableRNG interaction (#1515)
  • memory leak (?) on AMD MI250X GPUs (#1516)
  • Reactant get_device with sharding throws error inside of MLDataDevices, impossible to use with TrainState API (#1520)
  • Error "failed to run pass manager on module" only on Vector input (#1521)
  • Local MPI rank is always 0 if Ipopt solver is imported before Lux and MPI (#1525)
  • Automatically cache allocations for JuliaGPU workloads (#1527)
  • Relax ForwardDiff version bound? (#1530)
  • Reactant RNG handling broken in latest release (#1531)
  • Exporting to Jax manual entry segfaults with recent reactant (#1540)
  • Identity matrix initialization fills all entries with ones (#1543)
  • Embedding Layer results in scalar indexing with Reactant? (#1546)
  • Enzyme Cache Invalidation Failure with v1.10 (#1551)
  • OneHotArrays + Reactant with cross entropy loss (#1556)
  • Overhead of convolution on AMD GPU (#1557)
  • LuxTestUtils not re-exported (#1579)
  • [MLDataDevices] failure at transfering non numerical array (#1580)
  • Mark LuxCore imports in Lux as public (#1584)
  • [MLDataDevices] broken support for isbits types movement (#1586)
  • Update exporting to jax example to directly use new export functionality (#1588)
  • Failed to precompile LuxLossFunctionsExt (#1591)
  • Update AMD support documentation to recommend using Reactant (#1592)
  • ERROR: KeyError: key "ReactantCore" not found (#1596)
  • TagBot: Manual intervention needed for releases (#1604)
  • What is wrong with my custom layer? (#1610)
  • [MLDataDevices] AbstractCuSparseArray not defined in CUDA.jl v5.9.6 (#1615)

MLDataDevices-v1.16.1 (05 Jan 18:39, commit be6ce7a)

Diff since MLDataDevices-v1.16.0

Closed issues:

  • Proper Sparse Interfaces (#1014)
  • Relax ForwardDiff version bound? (#1530)
  • Update exporting to jax example to directly use new export functionality (#1588)
  • Failed to precompile LuxLossFunctionsExt (#1591)
  • TagBot: Manual intervention needed for releases (#1604)
  • What is wrong with my custom layer? (#1610)
  • [MLDataDevices] AbstractCuSparseArray not defined in CUDA.jl v5.9.6 (#1615)