Releases: jcmgray/cotengra

v0.7.5

12 Jun 23:52

Enhancements

  • ContractionTree.print_contractions: fix the show_brackets option and show preprocessing steps with the original input indices (see the sketch after this list).
  • Only warn about missing recommended dependencies when they would otherwise be used, i.e. only for hyper optimization.
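
A minimal sketch of inspecting a tree's contractions; the random equation helper and "greedy" preset here are just for illustration:

    import cotengra as ctg

    # a small random contraction, purely for illustration
    inputs, output, shapes, size_dict = ctg.utils.rand_equation(8, 3, seed=42)

    tree = ctg.array_contract_tree(inputs, output, size_dict, optimize="greedy")

    # print each pairwise contraction, including any preprocessing steps
    tree.print_contractions()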

Full Changelog: v0.7.4...v0.7.5

v0.7.4

14 May 00:06

Bug fixes

  • Fix and add edge case test for optimize=() #55.

v0.7.3

13 May 00:28

Enhancements

  • Allow manual path specification as an edge path, e.g. optimize=['b', 'c', 'a'] (see the sketch after this list).
  • Add optimize="edgesort" (also aliased to optimize="ncon"), which performs a contraction by contracting edges in sorted order, so the contraction can be specified entirely by the graph labelling.
  • Add edge_path_to_ssa and edge_path_to_linear for converting edge paths to SSA and linear paths respectively.
  • ContractionTree.from_path: allow an edge_path argument. Deprecate ContractionTree.from_edge_path method in favor of this.
  • Speed up ContractionTree.get_path to ~ n log(n).
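
A minimal sketch of the edge path options; the tiny matrix-chain network here is made up for illustration:

    import numpy as np
    import cotengra as ctg

    # a tiny matrix chain: indices a-b-c-d, contracting over 'b' and 'c'
    inputs = [("a", "b"), ("b", "c"), ("c", "d")]
    output = ("a", "d")
    arrays = [np.random.rand(2, 2) for _ in inputs]

    # manual edge path: contract the tensors sharing 'b', then those sharing 'c'
    x = ctg.array_contract(arrays, inputs, output, optimize=["b", "c"])

    # or let sorted edge order fix the contraction entirely
    y = ctg.array_contract(arrays, inputs, output, optimize="edgesort")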

v0.7.2

01 Apr 16:46

  • When contracting with slices and strip_exponent enabled, each slice result is now returned with its exponent separately, rather than being matched to the first slice's exponent; these are then combined in gather_slices.
  • If check_zero=True, strip_exponent=True, and a zero slice is encountered, the returned exponent will now be float('-inf') rather than 0.0, for compatibility with the above.

Full Changelog: v0.7.1...v0.7.2

v0.7.1

25 Feb 02:04

  • ReusableHyperOptimizer and DiskDict: allow splitting the key into a subdirectory structure for better performance. Enabled by default for new caches.
  • High level interface functions accept the strip_exponent kwarg, which eagerly strips a scaling exponent (log10) as the contraction proceeds, avoiding issues with very large or very small numeric values (see the sketch after this list).
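
A sketch of what this looks like at the high level; the return convention assumed here (a mantissa plus base-10 exponent) should be checked against the docs:

    import numpy as np
    import cotengra as ctg

    x, y, z = (np.random.rand(8, 8) for _ in range(3))

    # assuming the stripped result comes back as (mantissa, exponent),
    # with the full value equal to mantissa * 10**exponent
    mantissa, exponent = ctg.einsum("ab,bc,cd->ad", x, y, z, strip_exponent=True)
    result = mantissa * 10**exponent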

Full Changelog: v0.7.0...v0.7.1

v0.7.0

07 Jan 23:44

Enhancements

  • Add cmaes as an optlib method, and use it by default for the 'auto' preset if available, since it has less overhead than optuna.
  • Add HyperOptimizer.plot_parameters_parallel for plotting the sampled parameter space of a hyper optimizer method in parallel coordinates.
  • Add ncon interface.
  • Add utils.save_to_json and utils.load_from_json for saving and loading contractions to/from json.
  • Add examples/benchmarks with various json benchmark contractions.
  • Add utils.networkx_graph_to_equation for converting a networkx graph to cotengra style inputs, output and size_dict.
  • Add "max" as a valid minimize option for optimize_optimal (also added to cotengrust), which minimizes the single most expensive contraction (i.e. the cost scaling)
  • Add RandomOptimizer, a fully random optimizer for testing and initialization purposes. It can be used with optimize="random" but is not recommended for actual optimization.
  • Add PathOptimizer to top-level namespace.
  • ContractionTreeCompressed.from_path: add the autocomplete option.
  • Add option overwrite="improved" to reusable hyper optimizers, which always searches but only overwrites if the new tree is better, allowing easy incremental refinement of a collection of trees (see the sketch after this list).
  • einsum via bmm (implementation="cotengra") avoids using einsum for transposing inputs.
  • add the {ref}`ex_extract_contraction` example doc.
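
A hedged sketch combining a couple of the items above; the cache directory name and trial count are arbitrary, and optlib="cmaes" requires the cmaes package:

    import cotengra as ctg

    inputs, output, shapes, size_dict = ctg.utils.rand_equation(30, 3, seed=42)

    opt = ctg.ReusableHyperOptimizer(
        optlib="cmaes",           # lower-overhead sampler, if installed
        max_repeats=32,
        directory="ctg_cache",    # on-disk cache of optimized trees
        overwrite="improved",     # re-search, but only keep strictly better trees
    )
    tree = opt.search(inputs, output, size_dict)
    print(tree.contraction_cost(), tree.contraction_width())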

Full Changelog: v0.6.2...v0.7.0

v0.6.2

22 May 00:54

Bug fixes

  • Fix final, output contractions being mistakenly marked as not tensordot-able.
  • When contracting with implementation="autoray", don't require a backend to have both einsum and tensordot, instead fallback to cotengra's own.

Full Changelog: v0.6.1...v0.6.2

v0.6.1

15 May 21:37

What's Changed

Breaking changes

  • The number of workers initialized (for non-distributed pools) is now set, in order of preference, to: 1. the environment variable COTENGRA_NUM_WORKERS, 2. the environment variable OMP_NUM_THREADS, or 3. os.cpu_count().

Enhancements

  • add RandomGreedyOptimizer, a lightweight and performant randomized greedy optimizer that eschews both hyperparameter tuning and full contraction tree construction, making it suitable for very large contractions (tens of thousands of tensors or more); see the sketch after this list.
  • add optimize_random_greedy_track_flops which runs N trials of (random) greedy path optimization, whilst computing the FLOP count simultaneously. This or its accelerated rust counterpart in cotengrust is the driver for the above optimizer.
  • add parallel="threads" backend, and make it the default for RandomGreedyOptimizer when cotengrust is present, since its version of optimize_random_greedy_track_flops releases the GIL.
  • significantly improve both the speed and memory usage of SliceFinder.
  • alias tree.total_cost() to tree.combo_cost().
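
A sketch of using the new optimizer on a large random network; the network generator and keyword values here are illustrative, so check the docs for the exact options:

    import cotengra as ctg

    # a large random network, for illustration
    inputs, output, shapes, size_dict = ctg.utils.rand_equation(1000, 3, seed=42)

    # randomized greedy search; uses the accelerated cotengrust kernel and
    # thread parallelism automatically when available
    opt = ctg.RandomGreedyOptimizer(max_repeats=64)
    tree = ctg.array_contract_tree(inputs, output, size_dict, optimize=opt)
    print(tree.contraction_cost())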

Full Changelog: v0.6.0...v0.6.1

v0.6.0

10 Apr 21:37

Bug fixes

  • all input node legs and pre-processing steps are now calculated lazily, allowing slicing of indices including those 'simplified' away #31.
  • make tree.peak_size more accurate, by taking the maximum assuming the left, right and parent tensors are all present at the same time.

Enhancements

  • add simulated annealing tree refinement (in path_simulated_annealing.py), based on "Multi-Tensor Contraction for XEB Verification of Quantum Circuits" by Gleb Kalachev, Pavel Panteleev, Man-Hong Yung (arXiv:2108.05665), and the "treesa" implementation in OMEinsumContractionOrders.jl by Jin-Guo Liu and Pan Zhang. This can be accessed most easily by supplying opt = HyperOptimizer(simulated_annealing_opts={}).
  • add ContractionTree.plot_flat: a new method for plotting the contraction tree as a flat diagram showing all indices on every intermediate (without requiring any graph layouts), which is useful for visualizing and understanding small contractions.
  • HyperGraph.plot: support showing hyper outer indices, multi-edges, and automatic unique coloring of nodes and indices (to match plot_flat).
  • add ContractionTree.plot_circuit for plotting the contraction tree as a circuit diagram, which is fast and useful for visualizing the traversal ordering for larger trees.
  • add ContractionTree.restore_ind for 'unslicing' or 'unprojecting' previously removed indices.
  • ContractionTree.from_path: add option complete to automatically complete the tree given an incomplete path (usually disconnected subgraphs - #29).
  • add ContractionTree.get_incomplete_nodes for finding all uncontracted childless-parentless node groups.
  • add ContractionTree.autocomplete for automatically completing a contraction tree, using the above method.
  • tree.plot_flat: show any preprocessing steps and optionally list sliced indices.
  • add get_rng as a single entry point for getting or propagating a random number generator, to help determinism.
  • set autojit="auto" for contractions, which by default turns on jit for backend="jax" only.
  • add tree.describe for various levels of information about a tree, e.g. tree.describe("full") and tree.describe("concise") (see the sketch after this list).
  • add ctg.GreedyOptimizer and ctg.OptimalOptimizer to the top namespace.
  • add ContractionTree.benchmark for automatically assessing hardware performance vs theoretical cost.
  • contraction trees now have a get_default_objective method that returns the objective function they were optimized with; this is now picked up automatically for simpler further refinement or scoring.
  • change the default 'sub' optimizer on divisive partition building algorithms to be 'greedy' rather than 'auto'. This might make individual trials slightly worse but makes each cheaper, see discussion: #27.
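
For example, a sketch of enabling the annealing refinement and summarizing the result; the random network and trial count are arbitrary:

    import cotengra as ctg

    inputs, output, shapes, size_dict = ctg.utils.rand_equation(50, 3, seed=42)

    # each hyper-optimization trial is followed by simulated-annealing refinement
    opt = ctg.HyperOptimizer(simulated_annealing_opts={}, max_repeats=16)
    tree = opt.search(inputs, output, size_dict)

    print(tree.describe("concise"))  # one-line summary
    print(tree.describe("full"))     # more detailed breakdown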

Full Changelog: v0.5.6...v0.6.0

v0.5.6

08 Dec 01:27

Bug fixes

  • fix a very rare but very infuriating bug related somehow to ReusableHyperOptimizer not being thread-safe and returning the wrong tree, especially on GitHub Actions.

Full Changelog: v0.5.5...v0.5.6