Releases: jcmgray/cotengra
v0.7.5
Enhancements
- `ContractionTree.print_contractions`: fix the `show_brackets` option, and show preprocessing steps with their original input indices.
- Only warn about missing recommended dependencies when they would otherwise be used, i.e. for hyper optimization only.
Full Changelog: v0.7.4...v0.7.5
v0.7.4
v0.7.3
Enhancements
- Allow manual path specification as an edge path, e.g. `optimize=['b', 'c', 'a']`.
- Add `optimize="edgesort"` (also aliased to `optimize="ncon"`), which performs a contraction by contracting edges in sorted order, and thus can be entirely specified by the graph labelling.
- Add `edge_path_to_ssa` and `edge_path_to_linear` for converting edge paths to SSA and linear paths respectively.
- `ContractionTree.from_path`: allow an `edge_path` argument. Deprecate the `ContractionTree.from_edge_path` method in favor of this.
- Speed up `ContractionTree.get_path` to ~ n log(n).
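An edge path names the *indices* to contract rather than pairs of tensors. As a hedged illustration of the kind of conversion `edge_path_to_linear` performs (this sketch is not cotengra's implementation, and the function name is made up): contracting an edge means pairwise-fusing all remaining tensors that share that index.

```python
def edge_path_to_linear_sketch(inputs, edge_path):
    """Convert an edge path to a linear (pairwise) contraction path.

    inputs: list of sets of index labels, one per tensor.
    edge_path: sequence of index labels giving the contraction order.
    """
    terms = [set(t) for t in inputs]
    path = []
    for ix in edge_path:
        # positions of all remaining tensors containing this index
        locs = [i for i, t in enumerate(terms) if ix in t]
        while len(locs) > 1:
            # contract the first two, appending the fused term at the end,
            # as in the linear (opt_einsum style) path convention
            i, j = locs[0], locs[1]
            path.append((i, j))
            fused = terms[i] | terms[j]
            terms.pop(j)  # pop the larger position first to keep i valid
            terms.pop(i)
            terms.append(fused)
            locs = [k for k, t in enumerate(terms) if ix in t]
    return path
```

For a triangle of tensors `[{'a','b'}, {'b','c'}, {'c','a'}]`, the edge path `['b', 'c', 'a']` yields the linear path `[(0, 1), (0, 1)]`.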
v0.7.2
- When contracting with slices and `strip_exponent` enabled, each slice result is returned with its exponent separately, rather than matching the first; these are now combined in `gather_slices`.
- If `check_zero=True`, `strip_exponent=True`, and a zero slice is encountered, the returned exponent will now be `float('-inf')` rather than `0.0`, for compatibility with the above.
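A minimal sketch of the combination step described above, assuming each slice arrives as a `(mantissa, exponent)` pair with value `mantissa * 10**exponent` (the helper name is hypothetical, not the actual `gather_slices` code): rescale every mantissa to the largest exponent before summing, so slices of very different magnitude combine stably, and a `float('-inf')` exponent naturally contributes zero.

```python
def gather_slices_sketch(slices):
    """Combine (mantissa, exponent) slice results into a single pair.

    slices: iterable of (m, e) with slice value = m * 10**e.
    Returns (total_mantissa, exponent) relative to the largest exponent.
    """
    slices = list(slices)
    e_max = max(e for _, e in slices)
    if e_max == float("-inf"):
        # every slice was exactly zero (the check_zero/strip_exponent case)
        return 0.0, float("-inf")
    # rescale each mantissa to the common largest exponent, then sum;
    # 10.0 ** (-inf) == 0.0, so zero slices drop out automatically
    total = sum(m * 10.0 ** (e - e_max) for m, e in slices)
    return total, e_max
```

E.g. slices `(1.0, 2)` and `(5.0, 1)` (i.e. 100 and 50) combine to `(1.5, 2)`, i.e. 150.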
Full Changelog: v0.7.1...v0.7.2
v0.7.1
- `ReusableHyperOptimizer` and `DiskDict`: allow splitting the key into a subdirectory structure for better performance. Enabled for new caches by default.
- High level interface functions accept the `strip_exponent` kwarg, which eagerly strips a scaling exponent (log10) as the contraction proceeds, avoiding issues with very large or very small numeric values.
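To illustrate why stripping a log10 exponent helps, here is a toy example (pure Python, not the library's code) of multiplying many numbers whose plain product would overflow or underflow a float, by factoring the running magnitude into a separately tracked exponent:

```python
import math

def product_with_stripped_exponent(values):
    """Multiply floats without overflow/underflow by tracking a log10
    exponent separately: product == mantissa * 10**exponent."""
    mantissa, exponent = 1.0, 0.0
    for v in values:
        mantissa *= v
        scale = abs(mantissa)
        if scale == 0.0:
            return 0.0, float("-inf")
        # strip the current magnitude into the running exponent,
        # keeping the mantissa at unit magnitude
        mantissa /= scale
        exponent += math.log10(scale)
    return mantissa, exponent
```

`[1e200, 1e200, 1e-100]` would overflow as a naive product, but here yields mantissa 1.0 with exponent 300.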
Full Changelog: v0.7.0...v0.7.1
v0.7.0
Enhancements
- Add `cmaes` as an `optlib` method, and use it by default for the `'auto'` preset when available, since it has less overhead than `optuna`.
- Add `HyperOptimizer.plot_parameters_parallel` for plotting the sampled parameter space of a hyper optimizer method in parallel coordinates.
- Add an `ncon` interface.
- Add `utils.save_to_json` and `utils.load_from_json` for saving and loading contractions to/from JSON.
- Add `examples/benchmarks` with various JSON benchmark contractions.
- Add `utils.networkx_graph_to_equation` for converting a networkx graph to cotengra-style `inputs`, `output` and `size_dict`.
- Add `"max"` as a valid `minimize` option for `optimize_optimal` (also added to `cotengrust`), which minimizes the single most expensive contraction (i.e. the cost scaling).
- Add `RandomOptimizer`, a fully random optimizer for testing and initialization purposes. It can be used with `optimize="random"` but is not recommended for actual optimization.
- Add `PathOptimizer` to the top-level namespace.
- `ContractTreeCompressed.from_path`: add the `autocomplete` option.
- Add the option `overwrite="improved"` to reusable hyper optimizers, which always searches but only overwrites if the new tree is better, allowing easy incremental refinement of a collection of trees.
- einsum via BMM (`implementation="cotengra"`) avoids using einsum for transposing inputs.
- Add the example doc {ref}`ex_extract_contraction`.
Bug fixes
- Fix `HyperGraph.plot` when nodes are not labelled as consecutive integers (#36).
- Fix `ContractionTreeCompressed.windowed_reconfigure` not propagating the default objective.
- Fix `kahypar` path optimization when no edges are present (#48).
Full Changelog: v0.6.2...v0.7.0
v0.6.2
Bug fixes
- Fix final, output contractions being mistakenly marked as not tensordot-able.
- When contracting with `implementation="autoray"`, don't require a backend to have both `einsum` and `tensordot`; instead fall back to `cotengra`'s own implementations.
Full Changelog: v0.6.1...v0.6.2
v0.6.1
What's Changed
Breaking changes
- The number of workers initialized (for non-distributed pools) is now set to, in order of preference: 1. the environment variable `COTENGRA_NUM_WORKERS`, 2. the environment variable `OMP_NUM_THREADS`, or 3. `os.cpu_count()`.
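The preference order can be sketched as follows (`choose_num_workers` is a hypothetical helper that simply mirrors the rule stated above, not cotengra's actual function):

```python
import os

def choose_num_workers(environ=None):
    """Resolve the worker count: COTENGRA_NUM_WORKERS, then
    OMP_NUM_THREADS, then os.cpu_count()."""
    environ = os.environ if environ is None else environ
    for var in ("COTENGRA_NUM_WORKERS", "OMP_NUM_THREADS"):
        val = environ.get(var)
        if val is not None:
            return int(val)
    return os.cpu_count()
```

So e.g. setting `COTENGRA_NUM_WORKERS=2` wins even when `OMP_NUM_THREADS=4` is also set.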
Enhancements
- add `RandomGreedyOptimizer`, a lightweight and performant randomized greedy optimizer, eschewing both hyper-parameter tuning and full contraction tree construction, making it suitable for very large contractions (10,000s of tensors+).
- add `optimize_random_greedy_track_flops`, which runs N trials of (random) greedy path optimization whilst computing the FLOP count simultaneously. This, or its accelerated rust counterpart in `cotengrust`, is the driver for the above optimizer.
- add the `parallel="threads"` backend, and make it the default for `RandomGreedyOptimizer` when `cotengrust` is present, since its version of `optimize_random_greedy_track_flops` releases the GIL.
- significantly improve both the speed and memory usage of `SliceFinder`.
- alias `tree.total_cost()` to `tree.combo_cost()`.
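To give a flavour of the approach (this is an illustrative toy, not the algorithm in `RandomGreedyOptimizer` or `cotengrust`): a randomized greedy optimizer scores each candidate pairwise contraction, here by the size of the resulting tensor plus random jitter, runs several trials while tracking the flop count, and keeps the cheapest linear path found.

```python
import random
from math import prod

def random_greedy_sketch(inputs, output, size_dict, ntrials=8, seed=0):
    """Run `ntrials` jittered greedy trials; return (path, flops) for the
    cheapest linear contraction path found."""
    rng = random.Random(seed)
    out = frozenset(output)
    best_flops, best_path = float("inf"), None
    for _ in range(ntrials):
        terms = [frozenset(t) for t in inputs]
        # index -> number of remaining terms (plus the output) using it
        counts = {}
        for t in terms + [out]:
            for ix in t:
                counts[ix] = counts.get(ix, 0) + 1
        path, flops = [], 0
        while len(terms) > 1:
            choice, choice_score = None, float("inf")
            for i in range(len(terms)):
                for j in range(i + 1, len(terms)):
                    union = terms[i] | terms[j]
                    # indices still needed elsewhere (or in output) survive
                    keep = frozenset(
                        ix for ix in union
                        if counts[ix] - (ix in terms[i]) - (ix in terms[j]) > 0
                    )
                    # score by resulting tensor size, plus random jitter so
                    # each trial can explore a different path
                    score = prod(size_dict[ix] for ix in keep) + rng.random()
                    if score < choice_score:
                        choice, choice_score = (i, j, union, keep), score
            i, j, union, keep = choice
            path.append((i, j))
            flops += prod(size_dict[ix] for ix in union)
            for ix in union:
                counts[ix] -= (ix in terms[i]) + (ix in terms[j])
            for ix in keep:
                counts[ix] += 1
            terms.pop(j)  # pop larger position first to keep i valid
            terms.pop(i)
            terms.append(keep)
        if flops < best_flops:
            best_flops, best_path = flops, path
    return best_path, best_flops
```

For a small matrix chain this reproduces the familiar left-to-right pairing; the randomization only matters once many near-equal choices exist.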
Full Changelog: v0.6.0...v0.6.1
v0.6.0
Bug fixes
- all input node legs and pre-processing steps are now calculated lazily, allowing slicing of indices, including those 'simplified' away (#31).
- make `tree.peak_size` more accurate, by taking the max assuming the left, right and parent tensors are all present at the same time.
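The peak-size accounting described above can be sketched as follows (names and data layout here are illustrative, not the `ContractionTree` internals): during each pairwise contraction the two children and the parent tensor coexist in memory, so the peak is the maximum over contractions of their summed sizes.

```python
from math import prod

def peak_size_sketch(contractions, size_dict):
    """contractions: iterable of (left_inds, right_inds, parent_inds)
    triples; returns the max of left + right + parent tensor sizes."""
    def size(inds):
        return prod(size_dict[ix] for ix in inds)
    return max(size(l) + size(r) + size(p) for l, r, p in contractions)
```

E.g. contracting `ab,bc->ac` with sizes a=2, b=3, c=4 peaks at 6 + 12 + 8 = 26 elements.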
Enhancements
- add simulated annealing tree refinement (in `path_simulated_annealing.py`), based on "Multi-Tensor Contraction for XEB Verification of Quantum Circuits" by Gleb Kalachev, Pavel Panteleev and Man-Hong Yung (arXiv:2108.05665), and the "treesa" implementation in OMEinsumContractionOrders.jl by Jin-Guo Liu and Pan Zhang. This can be accessed most easily by supplying `opt = HyperOptimizer(simulated_annealing_opts={})`.
- add `ContractionTree.plot_flat`: a new method for plotting the contraction tree as a flat diagram showing all indices on every intermediate (without requiring any graph layouts), which is useful for visualizing and understanding small contractions.
- `HyperGraph.plot`: support showing hyper outer indices, multi-edges, and automatic unique coloring of nodes and indices (to match `plot_flat`).
- add `ContractionTree.plot_circuit` for plotting the contraction tree as a circuit diagram, which is fast and useful for visualizing the traversal ordering for larger trees.
- add `ContractionTree.restore_ind` for 'unslicing' or 'unprojecting' previously removed indices.
- `ContractionTree.from_path`: add the option `complete` to automatically complete the tree given an incomplete path (usually due to disconnected subgraphs - #29).
- add `ContractionTree.get_incomplete_nodes` for finding all uncontracted childless-parentless node groups.
- add `ContractionTree.autocomplete` for automatically completing a contraction tree, using the above method.
- `tree.plot_flat`: show any preprocessing steps and optionally list sliced indices.
- add `get_rng` as a single entry point for getting or propagating a random number generator, to help determinism.
- set `autojit="auto"` for contractions, which by default turns on jit for `backend="jax"` only.
- add `tree.describe` for various levels of information about a tree, e.g. `tree.describe("full")` and `tree.describe("concise")`.
- add `ctg.GreedyOptimizer` and `ctg.OptimalOptimizer` to the top namespace.
- add `ContractionTree.benchmark` for automatically assessing hardware performance vs theoretical cost.
- contraction trees now have a `get_default_objective` method to return the objective function they were optimized with, for simpler further refinement or scoring, where it is now picked up automatically.
- change the default 'sub' optimizer on divisive partition building algorithms to `'greedy'` rather than `'auto'`. This might make individual trials slightly worse but makes each cheaper; see discussion: #27.
Full Changelog: v0.5.6...v0.6.0
v0.5.6
Bug fixes
- fix a very rare but very infuriating bug related somehow to `ReusableHyperOptimizer` not being thread-safe and returning the wrong tree, especially on GitHub Actions.
Full Changelog: v0.5.5...v0.5.6