
Commit 90e943d

dylanuys and benliang99 authored
Release 3.0.5 (#208)
* V3 (#187)
* V3
* removing v2 ci pipeline
* removing outdated .gitmodules
* keeping the noise that a sample size of 50 and a slight decay of 0.5 in the EMA provide, to avoid having any one model completely dominate the subnet
* release 2.2.6 datasets, models, and lora support (#188)
* deprecate stable-diffusion-inpainting
* .env templates
* V3/RGB (#191)
* bgr images --> rgb images
* proper BGR -> RGB conversion
* eradicate all usage of BGR in the image challenge flow
* extract frames as RGB
* skip extraneous RGB conversion
* fix DeeperForensics consistency
* v2 frame sampling parity + eidon mp4 fix
* missing import
* handling improper reporting of fps in webms
* correct content-type on miner side
* max_fpx setting
* improved video metadata extraction
* cleaning up ffprobe options
* fixing first frame rotation edge case
* i2i fix

---------

Co-authored-by: Dylan Uys <dylan@bitmind.ai>

* V3 frame extraction (#192)
* bgr images --> rgb images
* proper BGR -> RGB conversion
* eradicate all usage of BGR in the image challenge flow
* extract frames as RGB
* skip extraneous RGB conversion
* fix DeeperForensics consistency
* v2 frame sampling parity + eidon mp4 fix
* missing import
* handling improper reporting of fps in webms
* correct content-type on miner side
* max_fpx setting
* improved video metadata extraction
* cleaning up ffprobe options
* fixing first frame rotation edge case
* i2i fix
* frame extraction

---------

Co-authored-by: Dylan Uys <dylan@bitmind.ai>

* setup.sh
* removing wandb log call from generator
* V3/2.2.9 (#189)
* mugshot dataset
* black
* i2v support and fixed prompt motion enhancement
* gen pipeline updates for i2v
* fixing prompt indexing
* properly handling new prompt dictionary key (task type)
* V3/2.2.11 (#190)
* mugshot dataset
* black
* i2v support and fixed prompt motion enhancement
* gen pipeline updates for i2v
* prompt sanitation + i2v model
* more retries for prompt sanitation
* fixing truthy tuple assertion
* Update min_compute.yml
* fixing setup script name in docs
* correct script name
* updated requirements.txt with bittensor-cli
* removing wandb.off
* import cleanup
* miner substrate thread restart + vali autoupdate test
* temporary v3 branch set to test autoupdate
* autoupdate update
* lower frequency of autoupdate check
* autoupdate test
* check autoupdate at step 0
* typo
* autoupdate test
* don't set weights immediately at startup in case of many restarts
* Pyproject toml (#193)
* pyproject setup
* executable setup.sh
* autoupdate test
* resetting version after autoupdate tests
* Add Hugging Face model access instructions to validator docs; improve logging and fix LLM device mapping for multi-GPU
  - Added a section to Validating.md with instructions for gaining access to the required Hugging Face models (FLUX.1-dev, DeepFloyd IF).
  - Added logging of generation arguments in generation_pipeline.py.
  - Fix LLM loading for multi-GPU in prompt_generator.py: use device_map and remove .to(self.device) for quantized models. Quantized LLMs must use device_map for correct device placement; calling .to(self.device) causes device mismatch errors. Parse the GPU ID from the device string for device_map assignment.
* fixing image_samples check for i2i
* hf_xet requirement
* wandb autorestart
* Fix: raise an error if image is None for i2i/i2v tasks and ensure the image is converted from an array
* fixing wandb autorestart
* error log
* Update setup.sh to install Node.js 20.x LTS from NodeSource for pm2 compatibility; add doc note for existing validators' Hugging Face access
* external port for proxy cuz tensordock rugged us (#196)
* incentive doc
* typo
* proxy updates
* v2 parity encoding (#197)
* final autoupdate test
* reset version

---------

Co-authored-by: Benjamin S Liang <caliangben@gmail.com>
Co-authored-by: Dylan Uys <dylan@bitmind.ai>

* autoupdate set to main
* testing autoupdate on testnet
* autoupdate enabled by default
* autoupdate testnet
* pointing autoupdate at main by default
* removing extra state load command
* setting back to 360 epoch length
* burn for initial v3 release ramp-up
* debug log typo
* fixed merge to testnet
* Max Frames and Timeout (#203)
* fixing wandb cache clean paths (#202)
* max frames configuration
* fn header update
* slight increase to timeout
* adding extra metadata to testnet requests for miners (#201)
* remove max size arg
* Testnet Metadata (#204)
* adding extra metadata to testnet requests for miners
* adding label and mediatype to testnet metadata
* Log Augmentation Parameters (#205)
* log augmentation params
* braindead typo
* bump version
* [testnet] Release 3.0.5 (#207)
* fix hotkey check in sync_metagraph
* bump version

---------

Co-authored-by: Benjamin S Liang <caliangben@gmail.com>
Co-authored-by: Dylan Uys <dylan@bitmind.ai>
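One of the V3 bullets above mentions keeping a sample size of 50 and an EMA decay of 0.5 so that no single model can completely dominate the subnet. The snippet below is only a generic sketch of an exponential-moving-average update with that decay; the function and variable names are illustrative and not taken from the repository.

def ema_update(previous: float, observation: float, decay: float = 0.5) -> float:
    # Blend the prior score with the newest observation; a decay of 0.5
    # weights both equally, which preserves some of the per-step noise.
    return decay * previous + (1 - decay) * observation

# Example: a previous score of 0.8 and a new observation of 0.4 give 0.6.
print(ema_update(0.8, 0.4))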
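Several V3/RGB bullets concern converting OpenCV's default BGR channel order to RGB before frames enter the image challenge flow. A minimal illustration of that conversion follows; the file path and the hand-off to PIL are assumptions for the example, not code from the subnet.

import cv2
from PIL import Image

# OpenCV decodes images in BGR channel order by default.
bgr = cv2.imread("frame.png")  # illustrative path

# Convert to RGB before handing the array downstream, otherwise the
# red and blue channels arrive swapped in model preprocessing.
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
image = Image.fromarray(rgb)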
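The commit message also describes fixing multi-GPU LLM loading in prompt_generator.py by using device_map and dropping .to(self.device) for quantized models. The sketch below shows that general pattern with Hugging Face transformers; the model name, function name, and 4-bit configuration are assumptions and do not reflect the subnet's actual code.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

def load_quantized_llm(model_name: str, device: str = "cuda:0"):
    # Parse the GPU index out of a device string such as "cuda:1".
    gpu_id = int(device.split(":")[-1]) if ":" in device else 0

    quant_config = BitsAndBytesConfig(load_in_4bit=True)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # device_map places the quantized weights on the chosen GPU; calling
    # .to(device) afterwards on a bitsandbytes-quantized model raises an error.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=quant_config,
        device_map={"": gpu_id},
        torch_dtype=torch.bfloat16,
    )
    return tokenizer, model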
1 parent 6141a38 commit 90e943d

File tree

VERSION
bitmind/__init__.py
bitmind/scoring/eval_engine.py

3 files changed: +4 -4 lines changed

VERSION

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-3.0.4
+3.0.5

bitmind/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-__version__ = "3.0.4"
+__version__ = "3.0.5"
 
 version_split = __version__.split(".")
 __spec_version__ = (
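The __spec_version__ expression is truncated in the hunk above. Many Bittensor subnets pack the semantic version into a single integer roughly as in the sketch below; the exact multipliers used by bitmind are not visible in this diff and are assumed here.

# Hypothetical reconstruction: pack "3.0.5" into one integer spec version.
# The weights (1000, 10, 1) are an assumption, not taken from the diff.
__version__ = "3.0.5"
version_split = __version__.split(".")
__spec_version__ = (
    (1000 * int(version_split[0]))
    + (10 * int(version_split[1]))
    + (1 * int(version_split[2]))
)
# For 3.0.5 this yields 3005.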

bitmind/scoring/eval_engine.py

Lines changed: 2 additions & 2 deletions

@@ -126,7 +126,7 @@ def _update_scores(self, rewards: dict):
         scattered_rewards[vali_uids] = 0.0
         scattered_rewards[no_response_uids] = 0.0
         scattered_rewards[uids_array] = rewards
-        bt.logging.debug(f"Scattered rewards: {rewards}")
+        bt.logging.debug(f"Scattered rewards: {scattered_rewards}")
 
         # Update scores with rewards produced by this step.
         # shape: [ metagraph.n ]
@@ -288,7 +288,7 @@ def sync_to_metagraph(self):
         handles clearing predictio history in `update` when a new hotkey
         is detected"""
         hotkeys = self.tracker.miner_hotkeys
-        for uid, hotkey in enumerate(hotkeys):
+        for uid, hotkey in hotkeys.items():
             if hotkey != self.metagraph.hotkeys[uid]:
                 self.scores[uid] = 0  # hotkey has been replaced
         self.maybe_extend_scores(self.metagraph.n)
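The second hunk is the "fix hotkey check in sync_metagraph" change noted in the commit message. The snippet below illustrates why the old loop was wrong, assuming tracker.miner_hotkeys is a dict mapping uid to hotkey (an assumption; only the diff above comes from the repository).

# Hypothetical contents for illustration only.
miner_hotkeys = {5: "hk_a", 12: "hk_b"}

# Old loop: enumerate() over a dict iterates its keys, so `hotkey` ends up
# holding a uid and `uid` holds a meaningless counter.
for uid, hotkey in enumerate(miner_hotkeys):
    print(uid, hotkey)  # prints: 0 5, then 1 12

# New loop: .items() yields the real (uid, hotkey) pairs, so the comparison
# against metagraph.hotkeys[uid] checks the right miner.
for uid, hotkey in miner_hotkeys.items():
    print(uid, hotkey)  # prints: 5 hk_a, then 12 hk_b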
