Updates gazette_shard_read_head tracking #309
michaelschiff wants to merge 4 commits into gazette:master
Conversation
…king of this gauge value in consumer/transaction.go. Instead, exporting this metric via the Collector implemented in consumer/resolver.go means that the stat's life-cycle on the process matches that of the shard.
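Roughly, the shape this describes. This is a sketch only, not the exact diff: the metric name comes from the PR title, while the help string, the "shard"/"journal" label set, and the Describe body are illustrative assumptions.

```go
package consumer

import "github.com/prometheus/client_golang/prometheus"

// Sketch: declare the gauge as a Desc alongside the Resolver and export it
// from the Resolver's prometheus.Collector implementation, rather than
// setting it from consumer/transaction.go.
var shardReadHeadDesc = prometheus.NewDesc(
	"gazette_shard_read_head",
	"Read-through offset of each journal consumed by the shard.", // Assumed help text.
	[]string{"shard", "journal"}, nil) // Assumed label names.

// Describe implements prometheus.Collector.
func (r *Resolver) Describe(ch chan<- *prometheus.Desc) {
	ch <- shardReadHeadDesc
}
```

The consumer process then registers the Resolver itself (e.g. prometheus.MustRegister on the Resolver), so the series exists only while the Resolver holds the shard locally.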
jgraettinger left a comment
Reviewed 2 of 3 files at r1, all commit messages.
Reviewable status: 2 of 3 files reviewed, 3 unresolved discussions (waiting on @jgraettinger and @michaelschiff)
consumer/interfaces.go, line 304 at r1 (raw file):
[]string{"shard", "status"}, nil) shardReadHeadDesc = prometheus.NewDesc(
nit: go fmt? different spacing here.
consumer/resolver.go, line 394 at r1 (raw file):
// Collect implements prometheus.Collector
func (r *Resolver) Collect(ch chan<- prometheus.Metric) {
	r.state.KS.Mu.RLock()
I'm a little concerned about the scope of this lock, particularly if the reader of this channel is popping off metrics and streaming them into an http response (which is probably how I would do it). In that case, a slow or broken metrics client peer could cause the lock to be held indefinitely.
I'm thinking this should allocate into an intermediate state it intends to send, and then actually send that state to |ch| without holding any locks.
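Something like the following, for concreteness. It's a rough sketch: the r.shards map, its string-typed key, and the appendReadHeadMetrics helper (sketched in the next thread) are illustrative, not the PR's actual code.

```go
// Collect implements prometheus.Collector. Suggested shape: build the metrics
// into a local slice while holding the keyspace read lock, release the lock,
// and only then write to |ch|, so a slow or broken metrics peer draining the
// channel can never pin the lock.
func (r *Resolver) Collect(ch chan<- prometheus.Metric) {
	var metrics []prometheus.Metric

	r.state.KS.Mu.RLock()
	for id, s := range r.shards { // Illustrative: locally resolved shards.
		// Append this shard's gauges (per-journal read heads, status, ...).
		metrics = appendReadHeadMetrics(metrics, string(id), s)
	}
	r.state.KS.Mu.RUnlock()

	// No locks held while the collector drains the channel.
	for _, m := range metrics {
		ch <- m
	}
}
```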
consumer/resolver.go, line 405 at r1 (raw file):
status.Code.String())
for j, o := range shard.progress.readThrough {
You'll need to guard against a concurrent update from the completion of a consumer transaction. Call shard.Progress() to get an owned copy?
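For example, a sketch layered on that suggestion; the helper name and the two-value Progress() return (read-through and publish offsets) are assumptions:

```go
// appendReadHeadMetrics snapshots the shard's per-journal read-through
// offsets via Progress(), an owned copy that a completing consumer
// transaction can't mutate underneath us, and emits one
// gazette_shard_read_head gauge per journal.
func appendReadHeadMetrics(metrics []prometheus.Metric, id string, s Shard) []prometheus.Metric {
	readThrough, _ := s.Progress() // Assumed: returns copies of read-through & publish offsets.
	for journal, offset := range readThrough {
		metrics = append(metrics, prometheus.MustNewConstMetric(
			shardReadHeadDesc, prometheus.GaugeValue,
			float64(offset), id, string(journal)))
	}
	return metrics
}
```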
…void writing out to the metrics collection client while holding the lock
Moves into client/resolver so that we can manage the life-cycle of the gauge with the shard it is measuring