Releases: justinfx/gofileseq
v2.5.0 Release
Changelog
36f805f Update to using string.Builder to reduce allocations (go >= 1.10)
ac74218 Update README with fixed badges, with links
Automated with GoReleaser
Built with go version go1.10 linux/amd64
v2.4.1 Release
Changelog
81a0abe Fix bad formatting in error msg of range test
3feffc6 Update cmds to accept compile-time Version
71385fa Update changelog
42f20bb Remove reference to unmaintained c++ binding (as opposed to c++ port)
Automated with GoReleaser
Built with go version go1.10 linux/amd64
v2.4.0 Release
New in v2.4.0
- Update FindSequencesOnDisk to sort mixed frame padding into discrete sequence results
- Allow strict padding length check when filtering files for pattern match in FindSequenceOnDisk
- go/cpp: Adjust path split regex to better handle range directive chars being used at the end of a base name
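The strict padding check can be illustrated with a small sketch. `matchesStrictPadding` below is a hypothetical helper, not gofileseq's actual filter; it shows the idea of accepting a frame string only when its digit count is consistent with an expected zero-pad width:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// matchesStrictPadding reports whether a frame string is consistent with
// an exact zero-padded width. Hypothetical illustration only.
func matchesStrictPadding(frame string, width int) bool {
	if _, err := strconv.Atoi(frame); err != nil {
		return false
	}
	// Exactly `width` digits matches; a wider number with no leading
	// zeros also fits (e.g. 12345 still satisfies a %04d pattern).
	if len(frame) == width {
		return true
	}
	return len(frame) > width && !strings.HasPrefix(frame, "0")
}

func main() {
	fmt.Println(matchesStrictPadding("0001", 4))  // true
	fmt.Println(matchesStrictPadding("001", 4))   // false
	fmt.Println(matchesStrictPadding("12345", 4)) // true
}
```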
v2.3.1
v2.3.0
New in v2.3.0
- cpp - pure c++ port now available
Patch v2.2.3
New in v2.2.3
- #8 - Bug: Use deterministic resolution of the padding character in findSequencesOnDisk()
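A deterministic resolution means the padding character is always derived from the detected pad width by a fixed rule, rather than from whichever file happened to be scanned first. The sketch below assumes fileseq's usual notation ('#' for a multiple of four digits, '@' repeated per digit otherwise); it is a simplified stand-in for the library's resolver:

```go
package main

import (
	"fmt"
	"strings"
)

// padChar maps a zero-pad width to a padding-character string using a
// fixed rule, so the same width always yields the same result.
// Simplified sketch of the fileseq convention, not the actual resolver.
func padChar(width int) string {
	if width > 0 && width%4 == 0 {
		return strings.Repeat("#", width/4) // '#' stands for 4 digits
	}
	return strings.Repeat("@", width) // '@' stands for 1 digit
}

func main() {
	fmt.Println(padChar(4)) // #
	fmt.Println(padChar(1)) // @
	fmt.Println(padChar(3)) // @@@
	fmt.Println(padChar(8)) // ##
}
```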
Previous releases:
2.2.2
- cmd/seqinfo - New tool for parsing and printing info about one or more file seq patterns
- Refactored vendoring into cmd/ location
2.2.1
- waf c++ build flags updated for static lib to avoid errors about -fPIC when linking into another C++ lib
2.0.1
2.0.0: Merge pull request #2 from justinfx/range_refactor
Changes:
- Major refactor to the underlying logic of resolving ranges, using an optimized storage and iteration approach to handle arbitrarily large ranges. This avoids memory allocation crashes and very slow construction of FrameSet/FileSequence
Known Issues:
- While creating a FrameSet from a massive range like "1-10000000000x2" will be very quick, creating FrameSets from multi-ranges with massive components like "20,50,60-10000000000" may be slow. Improvements are still needed to the process of detecting unique values in the previously added range components.
- Invert/Normalize functions are not as optimized as they could be. While they are much faster now for the common case of source ranges like "1-10x2,50-100x3", they are significantly slower for less common cases where the source range is a large list of individual values like "1,2,3,4,...,100000"
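The constant-space idea behind the range refactor can be sketched briefly: a simple "start-endxstep" range carries enough information to answer questions like frame count arithmetically, without materializing every frame. `rangeLen` below is a hypothetical sketch under that assumption; gofileseq's real parser also handles other directives and comma-separated multi-ranges:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// rangeLen computes the frame count of a simple "start-endxstep" range
// arithmetically in constant space. Hypothetical sketch only; it does
// not handle negative frames, "y"/":" directives, or multi-ranges.
func rangeLen(frange string) (int64, error) {
	step := int64(1)
	if i := strings.IndexByte(frange, 'x'); i >= 0 {
		s, err := strconv.ParseInt(frange[i+1:], 10, 64)
		if err != nil {
			return 0, err
		}
		step = s
		frange = frange[:i]
	}
	start, end := frange, frange
	if i := strings.IndexByte(frange, '-'); i > 0 {
		start, end = frange[:i], frange[i+1:]
	}
	a, err := strconv.ParseInt(start, 10, 64)
	if err != nil {
		return 0, err
	}
	b, err := strconv.ParseInt(end, 10, 64)
	if err != nil {
		return 0, err
	}
	return (b-a)/step + 1, nil
}

func main() {
	n, _ := rangeLen("1-10000000000x2")
	fmt.Println(n) // 5000000000 -- no per-frame allocation needed
}
```

Because the count comes from arithmetic rather than a stored slice, the range size has no effect on memory use, which is the property the v2.0.0 refactor is after.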