
Commit 6141a38

dylanuys and benliang99 authored
Release 3.0.4 (#206)
* V3 (#187)
* V3
* removing v2 ci pipeline
* removing outdated .gitmodules
* keeping the noise that a sample size of 50 and a slight EMA decay of 0.5 provide, to avoid having any one model completely dominate the subnet
* release 2.2.6 datasets, models, and lora support (#188)
* deprecate stable-diffusion-inpainting
* .env templates
* V3/RGB (#191)
* BGR images --> RGB images
* proper BGR -> RGB conversion
* eradicate all usage of BGR in image challenge flow
* extract frames as RGB
* skip extraneous RGB conversion
* fix DeeperForensics consistency
* v2 frame sampling parity + eidon mp4 fix
* missing import
* handling improper reporting of fps in webms
* correct content-type on miner side
* max_fps setting
* improved video metadata extraction
* cleaning up ffprobe options
* fixing first frame rotation edge case
* i2i fix

---------
Co-authored-by: Dylan Uys <dylan@bitmind.ai>

* V3 frame extraction (#192)
* BGR images --> RGB images
* proper BGR -> RGB conversion
* eradicate all usage of BGR in image challenge flow
* extract frames as RGB
* skip extraneous RGB conversion
* fix DeeperForensics consistency
* v2 frame sampling parity + eidon mp4 fix
* missing import
* handling improper reporting of fps in webms
* correct content-type on miner side
* max_fps setting
* improved video metadata extraction
* cleaning up ffprobe options
* fixing first frame rotation edge case
* i2i fix
* frame extraction

---------
Co-authored-by: Dylan Uys <dylan@bitmind.ai>

* setup.sh
* removing wandb log call from generator
* V3/2.2.9 (#189)
* mugshot dataset
* black
* i2v support and fixed prompt motion enhancement
* gen pipeline updates for i2v
* fixing prompt indexing
* properly handling new prompt dictionary key (task type)
* V3/2.2.11 (#190)
* mugshot dataset
* black
* i2v support and fixed prompt motion enhancement
* gen pipeline updates for i2v
* prompt sanitation + i2v model
* more retries for prompt sanitation
* fixing truthy tuple assertion
* Update min_compute.yml
* fixing setup script name in docs
* correct script name
* updated requirements.txt with bittensor-cli
* removing wandb.off
* import cleanup
* miner substrate thread restart + vali autoupdate test
* temporary v3 branch set to test autoupdate
* autoupdate update
* lower frequency of autoupdate check
* autoupdate test
* check autoupdate at step 0
* typo
* autoupdate test
* don't set weights immediately at startup in case of many restarts
* Pyproject toml (#193)
* pyproject setup
* executable setup.sh
* autoupdate test
* resetting version after autoupdate tests
* Add Hugging Face model access instructions to validator docs; improve logging and fix LLM device mapping for multi-GPU
  - Added a section to Validating.md with instructions for gaining access to required Hugging Face models (FLUX.1-dev, DeepFloyd IF).
  - Added logging of generation arguments in generation_pipeline.py.
  - Fixed LLM loading for multi-GPU in prompt_generator.py: use device_map and remove .to(self.device) for quantized models. Quantized LLMs must use device_map for correct device placement; calling .to(self.device) causes device mismatch errors. Parse the GPU ID from the device string for device_map assignment.
* fixing image_samples check for i2i
* hf_xet requirement
* wandb autorestart
* Fix: raise error if image is None for i2i/i2v tasks and ensure image is converted from array
* fixing wandb autorestart
* error log
* Update setup.sh to install Node.js 20.x LTS from NodeSource for pm2 compatibility; add doc note for existing validators' Hugging Face access
* external port for proxy because TensorDock rugged us (#196)
* incentive doc
* Typo
* proxy updates
* v2 parity encoding (#197)
* final autoupdate test
* reset version

---------
Co-authored-by: Benjamin S Liang <caliangben@gmail.com>
Co-authored-by: Dylan Uys <dylan@bitmind.ai>

* autoupdate set to main
* testing autoupdate on testnet
* autoupdate enabled by default
* autoupdate testnet
* pointing autoupdate at main by default
* removing extra state load command
* setting back to 360 epoch length
* burn for initial v3 release rampup
* debug log typo
* fixed merge to testnet
* Max Frames and Timeout (#203)
* fixing wandb cache clean paths (#202)
* max frames configuration
* fn header update
* slight increase to timeout
* adding extra metadata to testnet requests for miners (#201)
* remove max size arg
* Testnet Metadata (#204)
* adding extra metadata to testnet requests for miners
* adding label and media type to testnet metadata
* Log Augmentation Parameters (#205)
* log augmentation params
* braindead typo
* bump version

---------
Co-authored-by: Benjamin S Liang <caliangben@gmail.com>
Co-authored-by: Dylan Uys <dylan@bitmind.ai>
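
The multi-GPU fix called out above (use device_map rather than .to(self.device) when the LLM is quantized) is easier to read as code. A minimal sketch assuming a transformers/bitsandbytes setup; the helper name, model argument, and 4-bit config are hypothetical and are not taken from prompt_generator.py:

```python
# Minimal sketch of the device-placement fix described in the commit message.
# The helper and 4-bit config are illustrative; only the device_map pattern
# (and the deliberate absence of .to(device)) reflects the described change.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

def load_quantized_llm(model_name: str, device: str = "cuda:0"):
    # Parse the GPU index from a device string like "cuda:1".
    gpu_id = int(device.split(":")[1]) if ":" in device else 0
    # Quantized models must be placed via device_map; calling .to(device)
    # afterwards causes device-mismatch errors, so it is omitted here.
    return AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=BitsAndBytesConfig(load_in_4bit=True),
        device_map={"": gpu_id},
    )
```
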
1 parent bd7c4e2 commit 6141a38


7 files changed: +60 -11 lines changed


VERSION

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-3.0.3
+3.0.4

bitmind/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-__version__ = "3.0.3"
+__version__ = "3.0.4"

 version_split = __version__.split(".")
 __spec_version__ = (

bitmind/cache/sampler/video_sampler.py

Lines changed: 7 additions & 0 deletions

@@ -39,6 +39,7 @@ async def sample(
         remove_from_cache: bool = False,
         min_duration: float = 1.0,
         max_duration: float = 6.0,
+        max_frames: int = 144,
     ) -> Dict[str, Any]:
         """
         Sample random video segments from the cache.

@@ -71,6 +72,7 @@ async def sample(
             files=cached_files,
             min_duration=min_duration,
             max_duration=max_duration,
+            max_frames=max_frames,
             remove_from_cache=remove_from_cache,
         )

@@ -85,6 +87,7 @@ async def _sample_frames(
         min_duration: float = 1.0,
         max_duration: float = 6.0,
         max_fps: float = 30.0,
+        max_frames: int = 144,
         remove_from_cache: bool = False,
         as_float32: bool = False,
         channels_first: bool = False,

@@ -94,8 +97,11 @@ async def _sample_frames(
         Sample a random video segment and return it as a numpy array.

         Args:
+            files: Dict mapping source names to lists of video file paths
             min_duration: Minimum duration of video segment to extract in seconds
             max_duration: Maximum duration of video segment to extract in seconds
+            max_fps: Maximum frame rate to use when sampling frames
+            max_frames: Maximum number of frames to extract
             remove_from_cache: Whether to remove the source video from cache
             as_float32: Whether to return frames as float32 (0-1) instead of uint8 (0-255)
             channels_first: Whether to return frames with channels first (TCHW) instead of channels last (THWC)

@@ -155,6 +161,7 @@ async def _sample_frames(
         target_duration = min(target_duration, total_duration)

         num_frames = int(target_duration * frame_rate) + 1
+        num_frames = min(num_frames, max_frames)

         actual_duration = (num_frames - 1) / frame_rate

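For orientation, the effect of the new cap can be restated outside the sampler; the function below is a standalone sketch of the arithmetic in the last hunk above, not code from this module:

```python
# Standalone restatement of the frame-count arithmetic shown in the hunk above.
def clipped_frame_count(target_duration: float, frame_rate: float, max_frames: int = 144):
    num_frames = int(target_duration * frame_rate) + 1
    num_frames = min(num_frames, max_frames)          # new cap in 3.0.4
    actual_duration = (num_frames - 1) / frame_rate   # duration actually covered
    return num_frames, actual_duration

# A 6 s target at 30 fps previously yielded 181 frames; with the default cap
# it is trimmed to 144 frames, i.e. roughly 4.77 s of video.
print(clipped_frame_count(6.0, 30.0))  # (144, 4.766666666666667)
```
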
bitmind/config.py

Lines changed: 8 additions & 1 deletion

@@ -157,7 +157,7 @@ def add_validator_args(parser):
         "--neuron.miner-total-timeout",
         type=float,
         help="Total timeout for miner requests in seconds",
-        default=9.0,
+        default=11.0,
     )

     parser.add_argument(

@@ -300,6 +300,13 @@ def add_validator_args(parser):
         default=6.0,
     )

+    parser.add_argument(
+        "--challenge.max-frames",
+        type=int,
+        help="Maximum number of video frames to sample for a challenge",
+        default=144,
+    )
+

 def add_data_generator_args(parser):
     parser.add_argument(

bitmind/epistula.py

Lines changed: 13 additions & 6 deletions

@@ -101,6 +101,7 @@ async def query_miner(
     total_timeout: float,
     connect_timeout: Optional[float] = None,
     sock_connect_timeout: Optional[float] = None,
+    testnet_metadata: dict = None,
 ) -> Dict[str, Any]:
     """
     Query a miner with media data.

@@ -130,16 +131,22 @@ async def query_miner(

     try:

-        headers = generate_header(hotkey, media, axon_info.hotkey)
         url = f"http://{axon_info.ip}:{axon_info.port}/detect_{modality}"
+        headers = generate_header(hotkey, media, axon_info.hotkey)
+
+        headers = {
+            "Content-Type": content_type,
+            "X-Media-Type": modality,
+            **headers,
+        }
+
+        if testnet_metadata:
+            testnet_headers = {f"X-Testnet-{k}": str(v) for k, v in testnet_metadata.items()}
+            headers.update(testnet_headers)

         async with session.post(
             url,
-            headers={
-                "Content-Type": content_type,
-                "X-Media-Type": modality,
-                **headers,
-            },
+            headers=headers,
             data=media,
             timeout=aiohttp.ClientTimeout(
                 total=total_timeout,

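On the wire, the testnet metadata arrives as extra X-Testnet-* request headers. An illustration of the header construction above, with hypothetical metadata values:

```python
# Illustration of the X-Testnet-* header encoding added above; the metadata
# values are hypothetical examples, not taken from a real challenge.
testnet_metadata = {"media_type": "synthetic", "label": 1}
testnet_headers = {f"X-Testnet-{k}": str(v) for k, v in testnet_metadata.items()}
print(testnet_headers)
# {'X-Testnet-media_type': 'synthetic', 'X-Testnet-label': '1'}
```
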
neurons/miner.py

Lines changed: 19 additions & 0 deletions

@@ -140,6 +140,16 @@ def detect(self, media_tensor, modality):
         return probs


+def extract_testnet_metadata(headers):
+    headers = dict(headers)
+    testnet_metadata = {}
+    for key, value in headers.items():
+        if key.lower().startswith("x-testnet-"):
+            metadata_key = key[len("x-testnet-") :].lower()
+            testnet_metadata[metadata_key] = value
+    return testnet_metadata
+
+
 class Miner(BaseNeuron):
     neuron_type = NeuronType.MINER
     fast_api: FastAPIThreadedServer

@@ -189,6 +199,10 @@ async def detect_image(self, request: Request):
                 f"Unexpected content type: {content_type}, expected image/jpeg"
             )

+        testnet_metdata = extract_testnet_metadata(request.headers)
+        if len(testnet_metdata) > 0:
+            bt.logging.info(json.dumps(testnet_metdata, indent=2))
+
         try:
             image_array = np.array(Image.open(io.BytesIO(image_data)))
             image_tensor = torch.from_numpy(image_array).permute(2, 0, 1)

@@ -213,6 +227,11 @@ async def detect_video(self, request: Request):
             bt.logging.warning(
                 f"Unexpected content type: {content_type}, expected video/mp4 or video/mpeg"
             )
+
+        testnet_metdata = extract_testnet_metadata(request.headers)
+        if len(testnet_metdata) > 0:
+            bt.logging.info(json.dumps(testnet_metdata, indent=2))
+
         try:
             with tempfile.NamedTemporaryFile(suffix=".mp4") as temp_file:
                 temp_path = temp_file.name

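The new extract_testnet_metadata helper simply inverts the X-Testnet-* encoding used in bitmind/epistula.py; a quick check with illustrative header values:

```python
# Quick check of extract_testnet_metadata against the X-Testnet-* encoding;
# the header values here are illustrative only.
headers = {
    "Content-Type": "video/mp4",
    "X-Testnet-media_type": "synthetic",
    "X-Testnet-label": "1",
}
print(extract_testnet_metadata(headers))
# {'media_type': 'synthetic', 'label': '1'}
```
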
neurons/validator.py

Lines changed: 11 additions & 2 deletions

@@ -87,7 +87,7 @@ def init(self):
                 self.send_challenge_to_miners_on_interval,
                 self.update_compressed_cache_on_interval,
                 self.update_media_cache_on_interval,
-                self.start_new_wanbd_run_on_interval
+                self.start_new_wanbd_run_on_interval,
             ]
         )

@@ -218,6 +218,11 @@ async def send_challenge_to_miners_on_interval(self, block):
                     self.config.neuron.miner_total_timeout,
                     self.config.neuron.miner_connect_timeout,
                     self.config.neuron.miner_sock_connect_timeout,
+                    testnet_metadata=(
+                        {k: v for k, v in media_sample.items() if k != modality}
+                        if self.config.netuid != MAINNET_UID
+                        else {}
+                    ),
                 )
             )
         if len(challenge_tasks) != 0:

@@ -357,6 +362,7 @@ async def _sample_media(self) -> Optional[Dict[str, Any]]:
         kwargs = {
             "min_duration": self.config.challenge.min_clip_duration,
             "max_duration": self.config.challenge.max_clip_duration,
+            "max_frames": self.config.challenge.max_frames,
         }

         try:

@@ -396,7 +402,7 @@ async def _sample_media(self) -> Optional[Dict[str, Any]]:

         if sample and sample.get(modality) is not None:
             bt.logging.debug("Augmenting Media")
-            augmented_media, _, _ = apply_random_augmentations(
+            augmented_media, aug_level, aug_params = apply_random_augmentations(
                 sample.get(modality),
                 (256, 256),
                 sample.get("mask_center", None),

@@ -407,6 +413,9 @@ async def _sample_media(self) -> Optional[Dict[str, Any]]:
                     "modality": modality,
                     "media_type": media_type,
                     "label": MediaType(media_type).int_value,
+                    "metadata": sample.get("metadata", {}),
+                    "augmentation_level": aug_level,
+                    "augmentation_params": aug_params
                 }
             )
         return sample

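The conditional added to send_challenge_to_miners_on_interval forwards everything in the media sample except the raw media itself, and only when the validator is not on mainnet. A minimal restatement with illustrative values:

```python
# Minimal restatement of the testnet-metadata guard above; the sample contents
# are illustrative, and on_mainnet stands in for config.netuid == MAINNET_UID.
modality = "video"
media_sample = {
    "video": b"<raw mp4 bytes>",   # the raw media, keyed by its modality
    "modality": "video",
    "media_type": "synthetic",
    "label": 1,
}
on_mainnet = False

testnet_metadata = (
    {k: v for k, v in media_sample.items() if k != modality} if not on_mainnet else {}
)
print(testnet_metadata)
# {'modality': 'video', 'media_type': 'synthetic', 'label': 1}
```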