Network Topology #73
MatheusFranco99 started this conversation in SIP Discussions.
Network Topology pre-SIP Discussion
Table of Contents
Summary
This SIP proposes a new method for assigning committees to topics, aiming to reduce the number of irrelevant (non-committee) messages operators must process.
Motivation
Currently, committee-to-topic assignment is random, using a hash of committee operators modulo 128:
This randomness forces operators active in many committees to listen to multiple topics and process many irrelevant messages, which limits scalability.
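As an illustrative sketch of the current scheme (the exact hash function and operator-ID encoding used by the node are assumptions here, not taken from the SIP):

```python
import hashlib

TOPIC_COUNT = 128  # fixed number of topics

def current_topic(committee_operators: list[int]) -> int:
    # Hash the committee's (sorted) operator IDs and map the digest
    # modulo 128. SHA-256 and the comma-separated encoding are
    # illustrative assumptions, not the node's exact choices.
    encoded = ",".join(str(op) for op in sorted(committee_operators)).encode()
    digest = hashlib.sha256(encoded).digest()
    return int.from_bytes(digest, "big") % TOPIC_COUNT
```

Because the digest is effectively random, committees with heavily overlapping operator sets still land on unrelated topics, which is the behavior this SIP aims to change.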
Goal
Replace the current random assignment with a model that groups committees sharing operators into the same topics, so that operators process fewer non-committee messages.
Key Questions
Solution Classes
We classify models along three classes:
Proposed Models
1. Greedy Algorithm
This stateful algorithm minimizes assignment cost using an objective function. Let:
The cost to add a committee is defined by the aggregation cost:
This represents:
Initialization:
Event Handling:
Properties: Stateful, stable, history-dependent.
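A minimal sketch of the greedy step, under a stand-in objective (the SIP's actual Agg(c, t) is not reproduced here; `agg_cost` below simply counts the operators a topic would newly gain):

```python
TOPIC_COUNT = 128

def agg_cost(committee_ops: set[int], topic_ops: set[int]) -> int:
    # Stand-in for Agg(c, t): how many of the committee's operators
    # are not already listening to this topic. The SIP's real
    # objective function may weigh additional terms.
    return len(committee_ops - topic_ops)

def greedy_add(committee_ops: set[int], topics: list[set[int]]) -> int:
    # Assign the committee to the topic minimizing the aggregation
    # cost, then record its operators as listeners of that topic.
    best = min(range(len(topics)), key=lambda t: agg_cost(committee_ops, topics[t]))
    topics[best] |= committee_ops
    return best

topics = [set() for _ in range(TOPIC_COUNT)]
t1 = greedy_add({1, 2, 3, 4}, topics)
t2 = greedy_add({1, 2, 3, 5}, topics)  # shares 3 operators with the first
```

Because the second committee shares most of its operators with the first, even this simplified cost steers it into the same topic, which is exactly the grouping effect the model targets.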
2. MaxReach
This stateful model tracks each operator’s participation count in active committees.
In order to combine well-connected committee groups, a committee is assigned to a topic based on the operator with the highest reach (most committees).
The operator’s ID is hashed and mapped modulo 128.
Properties: Stateful, unstable, history-independent.
Note: it is unstable because adding a committee can change operator participation counts, which may in turn affect the assignments of other committees. However, such reassignments should be rare.
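A sketch of the MaxReach rule, assuming a reach table mapping each operator to its active-committee count (the hash and encoding are illustrative, not the SIP's exact choices):

```python
import hashlib

TOPIC_COUNT = 128

def maxreach_topic(committee_ops: list[int], reach: dict[int, int]) -> int:
    # Pick the committee operator participating in the most active
    # committees, then hash its ID and map it modulo 128. Hashing with
    # SHA-256 over the decimal ID string is an assumption here.
    top = max(committee_ops, key=lambda op: reach.get(op, 0))
    digest = hashlib.sha256(str(top).encode()).digest()
    return int.from_bytes(digest, "big") % TOPIC_COUNT
```

Every committee containing a given high-reach operator maps to that operator's topic, which is how well-connected committee groups end up together.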
3. LowestID
This stateless model simply assigns a committee to the topic corresponding to its operator with the lowest ID.
Properties: Stateless (and thus stable), history-independent.
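Assuming the lowest operator ID maps directly modulo the topic count (whether the node hashes the ID first is not specified above), LowestID reduces to a one-liner:

```python
def lowestid_topic(committee_ops: list[int], topic_count: int = 128) -> int:
    # The committee joins the topic of its lowest-ID operator.
    # Mapping the raw ID modulo the topic count is an assumption;
    # an implementation could equally hash the ID first.
    return min(committee_ops) % topic_count
```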
Analysis
Q1: Performance
We benchmarked models using current Mainnet data (~80k validators, 1k operators, 647 committees) and simulated scaling up to 8x (both in number of validators and operators).




The cryptography cost is measured per operator, accounting for the RSA verification cost of non-committee messages and the RSA + BLS cost of committee messages. Similarly, the message rate is measured per operator, counting all messages per epoch received across all of its topics.
Observations:
Q2: Viability
For Greedy:
Greedy is the most promising model.
Thus, we further evaluate it under continuous network updates to understand the necessity of re-initialization.
Greedy: Event Processing Performance
We compare:
- Greedy full init: full re-initialization on each event.
- G-YY[(X)]: initialize at date 20YY-12*X-01, then process events incrementally.

Validator counts over selected dates:

Results show that G-23, G-24, and G-24(½) significantly diverge from the Greedy full init benchmark. In contrast, G-24(¾) and G-25 closely match its performance, suggesting that initializing from a state nearer to the present date is better.

Greedy With Updates (GU)
We further improve event processing by allowing committees to update their topic assignments:
This method preserves the algorithm's stability, since it only reassigns the topic of the committee affected by the event.
The complexity of the event-processing operation changes from $O(1)$ to $O(T)$, since a comparison is made against each topic.
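A sketch of the GU event step under an assumed data model (committees as operator sets, an assignment map, and the same stand-in cost as before; the SIP's actual Agg(c, t) may differ):

```python
def gu_update(cid: int, committees: dict[int, set[int]],
              assignment: dict[int, int], topic_count: int) -> int:
    # On an event affecting committee `cid`, compare the cost against
    # every topic and reassign only this committee if a cheaper topic
    # exists. No other committee moves, so stability is preserved;
    # scanning all topics makes the event O(T) rather than O(1).
    def ops_in(topic: int) -> set[int]:
        # Operators already listening to `topic`, excluding `cid` itself.
        ops: set[int] = set()
        for c, t in assignment.items():
            if t == topic and c != cid:
                ops |= committees[c]
        return ops

    def cost(topic: int) -> int:
        # Stand-in for Agg(c, t): operators the topic would newly gain.
        return len(committees[cid] - ops_in(topic))

    best = min(range(topic_count), key=cost)
    if cost(best) < cost(assignment[cid]):
        assignment[cid] = best
    return assignment[cid]
```

In this sketch, a committee sharing most operators with an already-placed committee migrates to that committee's topic the next time an event touches it.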
The GU-24(½) variant comes closer to the Greedy full init. More notably, GU-24(¾) and GU-25 outperform even the benchmark in the 50k–60k validator range. This indicates that GU is a viable Greedy variant: it avoids costly full re-initializations while maintaining, and even improving, performance.

Scalability of GU
To validate scalability, we replicated the network up to 7× its original size and observed the results:
Key findings:
While GU-24(½) diverged, both GU-24(¾) and GU-25 consistently outperformed the benchmark as the network scaled. The results for GU-25 might seem counterintuitive, but they highlight that default Greedy is only locally optimal. In contrast, GU reassesses committee-topic assignments more frequently, uncovering better local optima.

To better reflect future network growth, we scaled only the recent growth pattern observed between 60k and 90k validators:



Observations:
GU-25 consistently maintained better results, though GU-24(¾) remained close to the full-initialization baseline.

Greedy With Split (GS)
To avoid dependence on the initialization date, we increase the disturbance in the event-processing functions, attempting to let the model find better local optima.
The split operation is described in terms of the aggregation operation:
Agg(c, t) is defined for a committee and a topic, but here we extrapolate the definition to two topics. This change makes the algorithm unstable, since the split operation may change the topics of other committees.
The complexity of event processing becomes $O(t_c \log t_c + T)$ due to the split operation and the per-topic comparison.
Final Evaluation
When considering both performance and viability:
Overall, Greedy emerges as the top choice because:
Important note: this scalability gain applies when the network grows in both validators and operators. If the validator count alone increases, the operational cost on operators rises linearly, and no topology model can mitigate that inherent scaling cost.