Issue submitter TODO list
- I've looked up my issue in FAQ
- I've searched for already existing issues here
- I've tried running the `main`-labeled docker image and the issue still persists there
- I'm running a supported version of the application which is listed here
Describe the bug (actual behavior)
Description
When loading topic messages with a cursor (e.g. "Next page"), if the cursor has `limit = 0` (or `messagesPerPage` is 0 for any other reason), the polling logic builds an empty range `(from, from)`. The next range is also `(from, from)`, so the consumer never advances and keeps re-reading the same data until the request is cancelled or times out.
Observed impact:
- One request ran for ~73 minutes (4,382,435 ms), read 202 GB, and consumed 433,048,747 messages before completing (e.g. via a client/server timeout or connection close).
- The UI showed "No messages found" — zero messages were returned to the user.
- The topic has on the order of ~1M messages; 433M consumed means the same data was effectively read hundreds of times in a single request.
So the bug causes both unbounded load (hundreds of GB, hundreds of millions of records) and a useless result (no messages).
Actual behavior
- When `messagesPerPage == 0`, `msgsToPollPerPartition` becomes 0.
- The range is built as `(fromOffset, fromOffset + 0)` → `(from, from)` (an empty range).
- In the inner loop, only records with `offset < fromTo.to` are added to the result; with `to == from`, no records satisfy this, so the result is always empty.
- The client receives zero MESSAGE events → the UI shows "No messages found".
- Every iteration still calls `poll()`; all those records are counted in "messages consumed" and bytes.
- `nextPollingRange` returns the same `(from, from)` again, so the loop never advances and keeps re-reading the same data until the request is cancelled or times out (e.g. after tens of minutes); a simplified sketch follows this list.
- After the request ends, the UI shows very large consumption stats (e.g. 433M messages, 202 GB) and an empty message list.
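For illustration, here is a self-contained simulation of that loop shape (all names and values are simplified stand-ins, not the actual emitter code):

```java
import java.util.List;

// Simulates the empty-range polling loop: with step == 0 the upper bound
// equals the lower bound, so no record ever qualifies, yet every polled
// record still counts toward "messages consumed".
public class EmptyRangeLoopDemo {
  public static void main(String[] args) {
    long from = 100L;
    int step = 0;                 // msgsToPollPerPartition when messagesPerPage == 0
    long to = from + step;        // range (100, 100): empty

    List<Long> polledOffsets = List.of(100L, 101L, 102L); // stand-in for consumer.poll()
    long consumed = 0;
    long emitted = 0;

    for (int iteration = 0; iteration < 5; iteration++) { // the real loop has no such bound
      for (long offset : polledOffsets) {
        consumed++;               // counted even though nothing is emitted
        if (offset < to) {        // never true when to == from
          emitted++;
        }
      }
      // nextPollingRange would return the same (from, from); from never advances
    }
    System.out.println("consumed=" + consumed + ", emitted=" + emitted); // consumed=15, emitted=0
  }
}
```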
Root cause
- `ForwardEmitter` / `BackwardEmitter`: `msgsToPollPerPartition` is computed as `(int) Math.ceil((double) messagesPerPage / readFromOffsets.size())`. When `messagesPerPage == 0`, this is 0, so the range becomes `(fromOffset, fromOffset)`. The consumer seeks to `from` and reads batches, but no record has `offset < from`, so the result list stays empty and no messages are sent to the client. The next range is again `(from, from)`, so the loop never moves forward (illustrated by the sketch after this list).
- `MessagesService`: when loading by cursor, `cursor.limit()` is passed directly into `loadMessages` without normalization, so a cursor stored with `limit = 0` (or any path that passes 0) leads to `messagesPerPage = 0` in the emitter and triggers the empty range and the re-read loop.
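The step arithmetic can be checked in isolation (the partition count here is an assumed example value):

```java
// Illustrates why the per-partition step collapses to 0 when messagesPerPage == 0.
public class StepMathDemo {
  public static void main(String[] args) {
    int partitions = 3; // stand-in for readFromOffsets.size()
    System.out.println((int) Math.ceil((double) 0 / partitions));   // 0 -> range (from, from)
    System.out.println((int) Math.ceil((double) 100 / partitions)); // 34 -> normal paging
  }
}
```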
Suggested fix
- Guarantee a non-zero step in emitters: in `ForwardEmitter` and `BackwardEmitter`, ensure the per-partition step is at least 1, e.g. `int msgsToPollPerPartition = Math.max(1, (int) Math.ceil((double) messagesPerPage / readFromOffsets.size()));` (and the same for `readToOffsets.size()` in `BackwardEmitter`). This prevents the range from ever being `(from, from)`; see the sketch after this list.
- Normalize the limit when loading by cursor: in `MessagesService.loadMessages(KafkaCluster, String, String cursorId)`, pass a normalized limit (e.g. via the same logic as `fixPageSize(cursor.limit())`) so that 0 is never passed to the emitter.
- Optional safeguard: in the polling loop, exit if the total number of messages read exceeds the topic size (e.g. `seekOperations.summaryOffsetsRange() * (1 + margin)`) to avoid unbounded re-reading even if another bug produces a similar pattern.
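A minimal sketch of the non-zero-step guard; the class and method here are hypothetical, written only to show that the clamp fixes the zero case without changing normal inputs:

```java
// Hypothetical helper demonstrating the Math.max(1, ...) clamp from the suggested fix.
public class NonZeroStepDemo {
  static int stepFor(int messagesPerPage, int partitions) {
    // Clamp to at least 1 so the polling range can never be (from, from).
    return Math.max(1, (int) Math.ceil((double) messagesPerPage / partitions));
  }

  public static void main(String[] args) {
    System.out.println(stepFor(0, 3));   // 1 instead of 0 -> the loop can advance
    System.out.println(stepFor(100, 3)); // 34 -> unchanged for normal page sizes
  }
}
```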
Affected code
- `api/src/main/java/io/kafbat/ui/emitter/ForwardEmitter.java` (lines ~47, 50–52)
- `api/src/main/java/io/kafbat/ui/emitter/BackwardEmitter.java` (analogous)
- `api/src/main/java/io/kafbat/ui/service/MessagesService.java` (`loadMessages` with `cursorId`, lines ~235–245)
Environment
- Reproduced with a topic on the order of 1M messages, multiple brokers; request completed after ~73 minutes with 433M messages consumed, 202 GB read, and "No messages found" in the UI.
Expected behavior
- Polling advances by a non-empty range each time and stops when the end of the topic is reached or the page limit is satisfied.
- A single request does not re-read the same offsets hundreds of times.
- If there are messages in the topic (and they match the filter), at least some messages are returned; if none match, the request still finishes after at most one pass over the topic.
Your installation details
8b5494b
24.07.2025, 14:40:05
Steps to reproduce
- Open topic messages in Kafka UI and run a search with a filter.
- Trigger loading the next page via cursor (e.g. "Next page" or any flow that uses the stored cursor).
- If the cursor was stored with `limit = 0` (or 0 is passed as the page size anywhere), the backend enters the infinite re-read loop.
- Let the request run until it ends (timeout or disconnect). Observe very high "messages consumed" and "bytes consumed" stats, and "No messages found" in the message list.
Screenshots
No response
Logs
No response
Additional context
No response