Releases: BitMind-AI/bitmind-subnet
Release 4.0.11
New Model
WAN2.2 TI2V: https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B-Diffusers
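For orientation, sampling from the new model via diffusers might look like the sketch below; the `WanPipeline` usage and generation parameters are assumptions drawn from the model card, not the subnet's actual pipeline.

```python
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Assumes a recent diffusers release with Wan support; resolution and
# frame count below are illustrative, not the subnet's settings.
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-TI2V-5B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")

frames = pipe(
    prompt="A drone shot over a coastal village at sunset",
    height=704, width=1280, num_frames=81,
).frames[0]
export_to_video(frames, "wan22_sample.mp4", fps=24)
```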
New Datasets
AI-generated videos from various tools: https://huggingface.co/datasets/bitmind/aislop-videos
GPT-4o images: https://huggingface.co/datasets/Yejy53/Echo-4o-Image
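A minimal way to peek at the new datasets with the `datasets` library (split names are assumptions; streaming avoids a full download):

```python
from datasets import load_dataset

slop_videos = load_dataset("bitmind/aislop-videos", split="train", streaming=True)
echo_images = load_dataset("Yejy53/Echo-4o-Image", split="train", streaming=True)

print(next(iter(echo_images)))  # inspect the first record's schema
```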
Release 4.0.10
- Nano Banana OpenRouter integration
- Bittensor 9.9.0 upgrade
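A hedged sketch of what an OpenRouter call for Nano Banana could look like; the model slug and payload shape are assumptions (OpenRouter exposes an OpenAI-compatible chat completions endpoint):

```python
import os
import requests

# The model slug is an assumption ("Nano Banana" = Gemini 2.5 Flash Image).
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "google/gemini-2.5-flash-image-preview",
        "messages": [{"role": "user", "content": "A photorealistic street scene at dusk"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"])
```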
Release 4.0.9
New Datasets
drawthingsai/megalith-10m
facebook/PE-Video
Rapidata/text-2-video-human-preferences-veo2
Rapidata/text-2-video-human-preferences-veo3
bitmind/bm-imagine
bitmind/aura-video
Other Changes
- Support for new dataset formats in the partial dataset download flow (see the sketch after this list)
  - Previously: parquet files containing images and zip files containing videos
  - Now: zip, tar, and parquet files containing either images or videos, as well as direct video and image file downloads
- Made temp dir location configurable to avoid storage issues when using a small container volume and a large NAS volume.
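A rough sketch of the expanded flow under these assumptions; the `GAS_TEMP_DIR` knob and the dispatch-by-suffix structure are hypothetical, not the actual implementation:

```python
import os
import tarfile
import tempfile
import zipfile
from pathlib import Path

# Hypothetical configurable temp dir (e.g. point it at the NAS volume).
TEMP_DIR = Path(os.environ.get("GAS_TEMP_DIR", tempfile.gettempdir()))
MEDIA_SUFFIXES = {".mp4", ".webm", ".png", ".jpg", ".jpeg"}

def stage_download(path: Path, dest: Path) -> None:
    if path.suffix == ".zip":
        with zipfile.ZipFile(path) as zf:       # zip of images or videos
            zf.extractall(dest)
    elif path.suffix in {".tar", ".gz", ".tgz"}:
        with tarfile.open(path) as tf:          # tar of images or videos
            tf.extractall(dest)
    elif path.suffix == ".parquet":
        pass  # decode the image/video bytes column (e.g. with pyarrow)
    elif path.suffix in MEDIA_SUFFIXES:
        path.rename(dest / path.name)           # direct media file download
    else:
        raise ValueError(f"unsupported format: {path.suffix}")
```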
Release 4.0.8
Replaced the synchronous substrate interface and threading with AsyncSubstrateInterface and asyncio only.
This gives us:
- A more stable connection to the blockchain (no more big ugly red JSON parse errors from a dead WebSocket connection)
- Simpler locking mechanism (asyncio.Lock) since we're no longer using Threads
- Better use of resources: a single shared event loop rather than new threads, each with its own event loop
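A minimal sketch of the new pattern, assuming the async-substrate-interface package's async context manager; the endpoint and call are illustrative:

```python
import asyncio
from async_substrate_interface import AsyncSubstrateInterface

lock = asyncio.Lock()  # replaces the old per-thread locking

async def main() -> None:
    # Everything runs on one shared event loop.
    async with AsyncSubstrateInterface("wss://entrypoint-finney.opentensor.ai:443") as substrate:
        async with lock:
            head = await substrate.get_chain_head()
            print(head)

asyncio.run(main())
```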
Release 4.0.5
- Temporarily increased the timeout and decreased challenge frequency
- Centralized cache deletion logic to keep the filesystem and prompt DB in sync
- Added Python dependency installation options to gascli:
  - `gascli install --py-deps` installs new Python dependencies
  - `gascli install --sys-deps` installs new system dependencies
  - `gascli install` reruns the full installation
The autoupdate process now calls gascli install commands to ensure dependencies stay up to date. This is in preparation for the next release, which will feature async-substrate-interface.
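Conceptually, the autoupdate hook now does something like this (a hypothetical sketch, not the actual autoupdate code):

```python
import subprocess

# After pulling a new release, re-run the installers so Python and
# system dependencies stay current with the code.
def post_update() -> None:
    subprocess.run(["gascli", "install", "--py-deps"], check=True)
    subprocess.run(["gascli", "install", "--sys-deps"], check=True)
```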
Release 4.0.4
Model Entrance Exam + Local Benchmarking
We've added a model "entrance exam" that runs automatically on every new model submission. The entrance exam, which acts as a gatekeeper for more in-depth model evaluation, tests each submitted model against the latest BitMind image/video benchmarks generated by the subnet. Models receive a pass/fail along with metrics so miners know where they stand.
You can also now reproduce the exact entrance exam locally with gascli!
Run the image benchmark: `gascli miner benchmark --image-model /path/image.onnx`
Run the video benchmark: `gascli miner benchmark --video-model /path/video.onnx`
Useful options:
- `-v | -vv | -vvv`: increase logging verbosity
- `--stream-images`: stream images instead of downloading them
- `--prune-old-data`: remove older cached splits/configs after fetching the latest
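To make the pass/fail mechanics concrete, here's a hedged sketch of an entrance-exam-style evaluation loop; the input shape, label layout, and 0.5 threshold are assumptions:

```python
import numpy as np
import onnxruntime as ort

# Run the submitted ONNX detector over benchmark media and compare
# accuracy to a pass threshold.
session = ort.InferenceSession("/path/image.onnx")
input_name = session.get_inputs()[0].name

benchmark = [(np.random.rand(1, 3, 224, 224).astype(np.float32), 1)]  # stand-in data
correct = sum(
    int(session.run(None, {input_name: x})[0].argmax(axis=-1)[0] == y)
    for x, y in benchmark
)
print("PASS" if correct / len(benchmark) >= 0.5 else "FAIL")
```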
Release 4.0.3
Key Change:
- Including media metadata in the orchestrator request to facilitate Hugging Face uploads for convenient local miner evaluation.
- Fields:
  - `label`: 0, 1, 2 for real, synthetic, semisynthetic
  - `source_type`: generated, dataset, scraped
  - `source_name`: model name, dataset name, or download URL
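For example, a payload carrying these fields might look like this (values are illustrative):

```python
media_metadata = {
    "label": 1,                   # 0 = real, 1 = synthetic, 2 = semisynthetic
    "source_type": "generated",   # generated | dataset | scraped
    "source_name": "Wan-AI/Wan2.2-TI2V-5B-Diffusers",  # model, dataset, or URL
}
```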
Other Changes:
- Enum and field mappings for `source_type`; updated usages for consistency with the `MediaType` and `Modality` enums
- Safer atomic write for validator state saving
- Prune prompts when their source media is deleted, rather than based on usage (see the sketch below). Once a prompt's associated source media is pruned, the prompt can no longer be used for inpainting and will never be sampled again, so keeping it around just wastes space.
- Also includes logic to clear out orphaned prompts, which will be obviated and removed after this release.
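A hypothetical sketch of the cascade-on-delete rule and the one-off orphan cleanup (table and column names are assumed):

```python
import sqlite3
from pathlib import Path

def delete_media(db: sqlite3.Connection, media_path: Path) -> None:
    # A prompt lives exactly as long as its source media.
    media_path.unlink(missing_ok=True)  # remove the file from the cache
    db.execute("DELETE FROM prompts WHERE source_media = ?", (str(media_path),))
    db.commit()

def prune_orphaned_prompts(db: sqlite3.Connection) -> None:
    # One-off cleanup for prompts whose media is already gone; obviated
    # once all deletions go through delete_media().
    rows = db.execute("SELECT id, source_media FROM prompts").fetchall()
    for prompt_id, media in rows:
        if not Path(media).exists():
            db.execute("DELETE FROM prompts WHERE id = ?", (prompt_id,))
    db.commit()
```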
Release 4.0.2
- Stabilizing the orchestrator: increased the request timeout to 90s and decreased challenge frequency to once every 2m
- Improving log structure for dashboard consumption; the outer keys are now:
  - `label`: integer label for `media_type` (0 = real, 1 = synthetic, 2 = semisynthetic)
  - `results`: list of dictionaries, each of which contains miner responses
  - `media_metadata`: describes the nature of the data used in the associated challenge
  - `augmentation_params`: contains the randomly selected parameters used in augmenting the media described in `media_metadata`
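An example entry in the new shape (the inner keys of `results` and `augmentation_params` are illustrative, not the exact schema):

```python
log_entry = {
    "label": 1,  # 0 = real, 1 = synthetic, 2 = semisynthetic
    "results": [{"miner_uid": 42, "prediction": [0.1, 0.8, 0.1]}],
    "media_metadata": {"source_type": "generated", "source_name": "..."},
    "augmentation_params": {"rotation_deg": 7, "jpeg_quality": 80},
}
```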
Release 4.0.0
Generative Adversarial Subnet
Release 3.3.1
- Dynamically limit number of frames based on resolution to avoid timeouts due to large payloads.
- Currently limited to 50M pixels per request (e.g. 24 frames at 1920x1080, 54 frames at 1280x720, 190 frames at 512x512, etc.)
- Future optimizations will allow for more frames to be sent per request.
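The frame cap falls out of simple arithmetic against the 50M-pixel budget; a quick sketch:

```python
PIXEL_BUDGET = 50_000_000  # max pixels per request

def max_frames(width: int, height: int) -> int:
    return PIXEL_BUDGET // (width * height)

print(max_frames(1920, 1080))  # 24
print(max_frames(1280, 720))   # 54
print(max_frames(512, 512))    # 190
```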