Commit a4ce5d5

Incentive Docs Update (#271)

* update docs
* fix underscores in math
* fix underscore bs again
* cleanup
* threshold visualization

1 parent c9d2702 commit a4ce5d5

File tree

3 files changed: +180 −9 lines
README.md

Lines changed: 3 additions & 9 deletions
```diff
@@ -7,17 +7,11 @@
   <a href="docs/Mining.md">⛏️ Mining</a> ·
   <a href="docs/Validating.md">🛡️ Validating</a> ·
   <a href="docs/Incentive.md">💰 Incentives</a> ·
-  <a href="https://app.bitmind.ai/statistics">🏆 Leaderboard</a>
+  <a href="https://app.bitmind.ai/">🏆 Leaderboard</a>
 </p>
-
-<p>
-  <a href="https://wandb.ai/bitmindai/subnet-379-validator">📊 W&B Mainnet 34 (coming soon)</a> ·
-  <a href="https://wandb.ai/bitmindai/subnet-379-validator">📊 W&B Testnet 379</a>
-</p>
-
 <p>
-  <a href="https://www.bitmind.ai/apps">🌐 Apps</a> ·
-  <a href="https://huggingface.co/bitmind">🤗 HF</a>
+  🤗 <a href="https://huggingface.co/gasstation">GAS-Station</a> ·
+  <a href="https://www.bitmind.ai/apps">🌐 Apps</a>
 </p>
 </div>
```

docs/Incentive.md

Lines changed: 177 additions & 0 deletions
# Incentive Mechanism

## Benchmark Runs

Submitted discriminator miners are evaluated against a subset of the data sources listed below. A portion of the evaluation data comes from generative miners, who are rewarded for submitting data that both passes validator sanity checks (prompt alignment, etc.) and fools discriminators in benchmark runs.
<details>
<summary><strong>Evaluation Datasets</strong></summary>

### Image Datasets

**Real Images:**
- [drawthingsai/megalith-10m](https://huggingface.co/datasets/drawthingsai/megalith-10m)
- [bitmind/bm-eidon-image](https://huggingface.co/datasets/bitmind/bm-eidon-image)
- [bitmind/bm-real](https://huggingface.co/datasets/bitmind/bm-real)
- [bitmind/open-image-v7-256](https://huggingface.co/datasets/bitmind/open-image-v7-256)
- [bitmind/celeb-a-hq](https://huggingface.co/datasets/bitmind/celeb-a-hq)
- [bitmind/ffhq-256](https://huggingface.co/datasets/bitmind/ffhq-256)
- [bitmind/MS-COCO-unique-256](https://huggingface.co/datasets/bitmind/MS-COCO-unique-256)
- [bitmind/AFHQ](https://huggingface.co/datasets/bitmind/AFHQ)
- [bitmind/lfw](https://huggingface.co/datasets/bitmind/lfw)
- [bitmind/caltech-256](https://huggingface.co/datasets/bitmind/caltech-256)
- [bitmind/caltech-101](https://huggingface.co/datasets/bitmind/caltech-101)
- [bitmind/dtd](https://huggingface.co/datasets/bitmind/dtd)
- [bitmind/idoc-mugshots-images](https://huggingface.co/datasets/bitmind/idoc-mugshots-images)

**Synthetic Images:**
- [bitmind/JourneyDB](https://huggingface.co/datasets/bitmind/JourneyDB)
- [bitmind/GenImage_MidJourney](https://huggingface.co/datasets/bitmind/GenImage_MidJourney)
- [bitmind/bm-aura-imagegen](https://huggingface.co/datasets/bitmind/bm-aura-imagegen)
- [bitmind/bm-imagine](https://huggingface.co/datasets/bitmind/bm-imagine)
- [Yejy53/Echo-4o-Image](https://huggingface.co/datasets/Yejy53/Echo-4o-Image)

**Semi-synthetic Images:**
- [bitmind/face-swap](https://huggingface.co/datasets/bitmind/face-swap)

### Video Datasets

**Real Videos:**
- [bitmind/bm-eidon-video](https://huggingface.co/datasets/bitmind/bm-eidon-video)
- [shangxd/imagenet-vidvrd](https://huggingface.co/datasets/shangxd/imagenet-vidvrd)
- [nkp37/OpenVid-1M](https://huggingface.co/datasets/nkp37/OpenVid-1M)
- [facebook/PE-Video](https://huggingface.co/datasets/facebook/PE-Video)

**Semi-synthetic Videos:**
- [bitmind/semisynthetic-video](https://huggingface.co/datasets/bitmind/semisynthetic-video)

**Synthetic Videos:**
- [Rapidata/text-2-video-human-preferences-veo3](https://huggingface.co/datasets/Rapidata/text-2-video-human-preferences-veo3)
- [Rapidata/text-2-video-human-preferences-veo2](https://huggingface.co/datasets/Rapidata/text-2-video-human-preferences-veo2)
- [bitmind/aura-video](https://huggingface.co/datasets/bitmind/aura-video)
- [bitmind/aislop-videos](https://huggingface.co/datasets/bitmind/aislop-videos)

</details>
<details>
<summary><strong>Generative Models</strong></summary>

The following models are run by validators to produce a continual, fresh stream of synthetic and semisynthetic data. The outputs of these models are uploaded at regular intervals to public datasets in the [GAS-Station](https://huggingface.co/gasstation) Hugging Face org for miner training and evaluation.

### Text-to-Image Models

- [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
- [SG161222/RealVisXL_V4.0](https://huggingface.co/SG161222/RealVisXL_V4.0)
- [Corcelio/mobius](https://huggingface.co/Corcelio/mobius)
- [prompthero/openjourney-v4](https://huggingface.co/prompthero/openjourney-v4)
- [cagliostrolab/animagine-xl-3.1](https://huggingface.co/cagliostrolab/animagine-xl-3.1)
- [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) + [Kvikontent/midjourney-v6](https://huggingface.co/Kvikontent/midjourney-v6) LoRA
- [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
- [DeepFloyd/IF](https://huggingface.co/DeepFloyd/IF)
- [deepseek-ai/Janus-Pro-7B](https://huggingface.co/deepseek-ai/Janus-Pro-7B)
- [THUDM/CogView4-6B](https://huggingface.co/THUDM/CogView4-6B)

### Image-to-Image Models

- [diffusers/stable-diffusion-xl-1.0-inpainting-0.1](https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1)
- [Lykon/dreamshaper-8-inpainting](https://huggingface.co/Lykon/dreamshaper-8-inpainting)

### Text-to-Video Models

- [tencent/HunyuanVideo](https://huggingface.co/tencent/HunyuanVideo)
- [genmo/mochi-1-preview](https://huggingface.co/genmo/mochi-1-preview)
- [THUDM/CogVideoX-5b](https://huggingface.co/THUDM/CogVideoX-5b)
- [ByteDance/AnimateDiff-Lightning](https://huggingface.co/ByteDance/AnimateDiff-Lightning)
- [Wan-AI/Wan2.2-TI2V-5B-Diffusers](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B-Diffusers)

### Image-to-Video Models

- [THUDM/CogVideoX1.5-5B-I2V](https://huggingface.co/THUDM/CogVideoX1.5-5B-I2V)

</details>
## Generator Rewards

The generator incentive mechanism combines two components: a base reward for passing data validation checks, and a multiplier based on adversarial performance against discriminators.

### Base Reward (Data Validation)

Generators receive a base reward proportional to their data verification pass rate:

$$R_{\text{base}} = p \cdot \min(n, 10)$$

Where:
- $p$ = pass rate (proportion of generated content that passes validation)
- $n$ = number of verified samples (the $\min(n, 10)$ term ramps incentive up over the first 10 samples)
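As an illustration, the base reward can be sketched in a few lines of Python (the function and variable names here are illustrative, not taken from the subnet codebase):

```python
def base_reward(pass_rate: float, num_verified: int) -> float:
    """Base generator reward: R_base = p * min(n, 10).

    The min(n, 10) term ramps incentive up over the first 10
    verified samples; beyond 10 samples only the pass rate matters.
    """
    return pass_rate * min(num_verified, 10)

# A generator with a 90% pass rate:
print(base_reward(0.9, 4))   # 3.6 (still ramping up)
print(base_reward(0.9, 25))  # 9.0 (sample count capped at 10)
```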

### Fool Rate Multiplier (Adversarial Performance)

Generators earn additional rewards by successfully fooling discriminators. The multiplier is calculated as:

$$M = \max(0, \min(2.0, f \cdot s))$$

Where:
- $f$ = fool rate = $\frac{N_{\text{fooled}}}{N_{\text{fooled}} + N_{\text{not fooled}}}$
- $s$ = sample size multiplier

The sample size multiplier encourages generators to be evaluated on more samples, similar to the sample-size ramp used in the base reward:
$$s = \begin{cases}
\max(0.5, \frac{c}{20}) & \text{if } c < 20 \\
\min(2.0, 1.0 + \ln(\frac{c}{20})) & \text{if } c \geq 20
\end{cases}$$

Where:
- $c$ = total evaluation count (fooled + not fooled)
- A reference count of 20 gives a multiplier of 1.0
- Sample sizes below 20 are penalized
- Sample sizes above 20 receive a logarithmic bonus, capped at 2.0x
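A minimal sketch of the multiplier, following the piecewise form above (function names are illustrative, not from the subnet codebase):

```python
import math

def sample_size_multiplier(count: int) -> float:
    """Sample-size term s: penalizes fewer than 20 evaluations and
    grants a logarithmic bonus, capped at 2.0, above the reference of 20."""
    if count < 20:
        return max(0.5, count / 20)
    return min(2.0, 1.0 + math.log(count / 20))

def fool_rate_multiplier(n_fooled: int, n_not_fooled: int) -> float:
    """M = clamp(f * s, 0, 2.0), with f the empirical fool rate."""
    count = n_fooled + n_not_fooled
    if count == 0:
        return 0.0
    f = n_fooled / count
    s = sample_size_multiplier(count)
    return max(0.0, min(2.0, f * s))

print(sample_size_multiplier(20))    # 1.0 (reference count)
print(fool_rate_multiplier(10, 10))  # 0.5 (50% fool rate at the reference count)
```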
### Final Generator Reward

The total generator reward combines both components:

$$R_{\text{total}} = R_{\text{base}} \cdot M$$

This design incentivizes generators to:
1. Produce high-quality, valid content (base reward)
2. Create adversarially robust content that can fool discriminators (multiplier)
3. Participate in more evaluations to earn sample-size bonuses
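Putting the two components together, a self-contained sketch of the end-to-end generator reward (names are illustrative; the constants follow the formulas above):

```python
import math

def total_generator_reward(pass_rate, num_verified, n_fooled, n_not_fooled):
    """R_total = R_base * M, combining validation and adversarial terms."""
    r_base = pass_rate * min(num_verified, 10)   # base reward ramp
    count = n_fooled + n_not_fooled
    if count == 0:
        return 0.0
    f = n_fooled / count                          # fool rate
    if count < 20:                                # sample-size term s
        s = max(0.5, count / 20)
    else:
        s = min(2.0, 1.0 + math.log(count / 20))
    m = max(0.0, min(2.0, f * s))                 # fool-rate multiplier
    return r_base * m

# Perfect pass rate, 10+ verified samples, 60% fool rate over 40 evaluations:
print(total_generator_reward(1.0, 12, 24, 16))
```

Note how a strong fool rate over a large evaluation count can push the final reward above the base reward alone, since $M$ can exceed 1.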

## Discriminator Rewards

The discriminator incentive mechanism uses a winner-take-all approach with a dynamic threshold that gradually decays over time. This ensures that only the best-performing discriminators receive rewards while maintaining competition over extended periods. Currently, scores are the mean of the image and video multiclass MCCs; in the future, this may be broken out into separate image and video thresholds.
### Threshold Function

The threshold function $T(x)$ is defined as:

$$T(x) = \max \left( S + \varepsilon, (S + \text{boost}) e^{-kx} \right)$$

Where:
- $S$ = new leader's score (e.g., 0.87)
- $\varepsilon$ = floor margin (we use 0.01, so the floor is $S + 0.01$)
- $\text{boost} = \min(\text{cap}, g \cdot \Delta)$, with $\Delta = S - S_{\text{prev}}$
  - we pick $g = 2.5$ and $\text{cap} = 0.05$
  - (so a +0.02 improvement gives $2.5 \times 0.02 = 0.05$, the full 5-point boost)
- $k$ is chosen by duration, so the decay lands exactly on the floor after $H$ epochs:

$$k = \frac{1}{H} \ln \left( \frac{S + \text{boost}}{S + \varepsilon} \right)$$

- with $H = 140$ epochs (~1 week)
### Example

Using the scenario from the threshold calculation:
- $S_{\text{prev}} = 0.85$, $S = 0.87$, $\Delta = 0.02$
- $\text{boost} = \min(0.05, 2.5 \times 0.02) = 0.05 \Rightarrow$ initial $T(0) = 0.92$
- $\varepsilon = 0.01 \Rightarrow$ floor $= 0.88$
- $k = \frac{1}{140} \ln(0.92/0.88) \approx 3.17 \times 10^{-4}$
- $T(x)$ then decays smoothly: $\sim 0.900$ around 70 epochs, clamping to 0.88 at 140
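The worked example can be reproduced with a small sketch of the threshold function (parameter names are ours; the defaults follow the values above):

```python
import math

def threshold(x, s_new, s_prev, eps=0.01, gain=2.5, cap=0.05, horizon=140):
    """Winner-take-all threshold T(x): decays exponentially from
    S + boost down to the floor S + eps over `horizon` epochs."""
    boost = min(cap, gain * (s_new - s_prev))
    floor = s_new + eps
    k = math.log((s_new + boost) / floor) / horizon   # lands on the floor at x = horizon
    return max(floor, (s_new + boost) * math.exp(-k * x))

# Worked example from above: S_prev = 0.85, S = 0.87
print(round(threshold(0, 0.87, 0.85), 3))    # 0.92 (initial boosted threshold)
print(round(threshold(70, 0.87, 0.85), 3))   # 0.9  (decaying toward the floor)
print(round(threshold(140, 0.87, 0.85), 3))  # 0.88 (clamped to the floor)
```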

The following plot illustrates how the threshold decays over time using the example parameters above:

![Threshold Decay Function](static/threshold_decay.png)

docs/static/threshold_decay.png

317 KB