fix: allowing the exceptionhandler to see reconnection exceptions #7367
shawkins wants to merge 1 commit into fabric8io:main
Conversation
closes: fabric8io#7164
Signed-off-by: Steve Hawkins <shawkins@redhat.com>
Force-pushed from 2162e2f to d1c1045
```java
 *
 * @deprecated to be removed in a future release.
 */
@Deprecated
```
Suggested change:

```diff
-@Deprecated
+@Deprecated(forRemoval = true)
```
```diff
  * Called to reestablish the connection. Should only be called once per request.
  */
-void scheduleReconnect(WatchRequestState state) {
+void scheduleReconnect(WatchRequestState state, Throwable t) {
```
Might it be better to keep the original one-arg version and add a two-arg version to which it delegates, so that call sites do not need to change? That also looks cleaner than passing null explicitly… I wish Java had a way to declare optional parameters…
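A minimal sketch of that suggestion, assuming the surrounding watch-manager class: the one-arg overload simply delegates to the new two-arg version. `WatchRequestState` is stubbed here only so the snippet compiles on its own; the real type lives in the client.

```java
// Stub of the client's internal type, only so this sketch is self-contained.
class WatchRequestState { }

class WatchManagerSketch {

  /**
   * Called to reestablish the connection. Should only be called once per request.
   */
  void scheduleReconnect(WatchRequestState state) {
    // delegate, passing null when no originating throwable is available
    scheduleReconnect(state, null);
  }

  void scheduleReconnect(WatchRequestState state, Throwable t) {
    // existing reconnection logic, now able to surface the originating throwable
  }
}
```

With this shape, existing call sites keep compiling and only the paths that actually have a throwable need to use the new overload.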
Regarding the overall question of which approach to choose, I'm not sure… The more correct
I understand it's now causing problems, but that's more because it was not implemented correctly - it was nearly identical to isRunning, which was not the intent. As for the correct behavior, it's just a question of:
This solution would probably lead down the path of replicating the kube probe configuration 😄
It's a little different. Here we're talking about what behavior to exhibit when the watch fails to re-establish. Currently it's unlimited retries, but there have been two user requests to refine that behavior. In any case there's still absolutely no guarantee that restarting the application will help in these circumstances. In older versions of fabric8 there were some issues with watches not re-establishing, or cache sync issues, but I don't believe we have seen an issue along those lines in a while.
closes: #7164
Draft of changes to allow the ExceptionHandler to deal with reconnection exceptions as well.
To keep this compatible with current behavior, the WatcherException is converted to a KubernetesClientException when passed into the ExceptionHandler - by default that will mean we'll keep retrying.
Users would need to update / set their own ExceptionHandler to cause the informer to give up in this case. This is mostly in line with what was requested on #7164
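For illustration, a hedged sketch of how a user might opt out of unlimited reconnect retries once this lands. The `runnableInformer`/`exceptionHandler` calls reflect the fabric8 informer API as I understand it; whether a reconnection failure is distinguishable from other exceptions inside the handler depends on the final shape of this PR.

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.informers.SharedIndexInformer;

public class GiveUpOnReconnectExample {
  public static void main(String[] args) {
    try (KubernetesClient client = new KubernetesClientBuilder().build()) {
      SharedIndexInformer<Pod> informer = client.pods()
          .inNamespace("default")
          .runnableInformer(0);

      // The default handler retries indefinitely; returning false tells the
      // informer to stop instead of scheduling another reconnect attempt.
      informer.exceptionHandler((isStarted, t) -> {
        if (isStarted) {
          // after the informer has started, give up rather than retry forever
          return false;
        }
        return true;
      });

      informer.start();
    }
  }
}
```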
Alternatives to this are:
cc @csviri @manusa @metacosm what would you like to see done here?