Add topologySpreadConstraints configuration to pod spec. #2530
laiminhtrung1997 wants to merge 15 commits into zalando:master
Conversation
We need that feature too.
```go
	}
	if !reflect.DeepEqual(c.Statefulset.Spec.Template.Spec.TopologySpreadConstraints, statefulSet.Spec.Template.Spec.TopologySpreadConstraints) {
		needsReplace = true
		needsRollUpdate = true
```
Does this really need to trigger a rolling update of the pods executed by the operator? Won't K8s take care of it once the statefulset is replaced?
I see, but is this wrong too?
https://github.com/zalando/postgres-operator/blob/master/pkg/cluster/cluster.go#L472
Hm, good point. Maybe we can leave it as is for now. With a rolling update we make sure the pods immediately adhere to the new constraints.
```go
	InitContainersOld       []v1.Container `json:"init_containers,omitempty"`
	PodPriorityClassNameOld string         `json:"pod_priority_class_name,omitempty"`

	TopologySpreadConstraints []v1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
```
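For context, a minimal sketch of the kind of value this new field would carry, built with the upstream core/v1 types. The topology key, skew, and the `cluster-name` label value are illustrative assumptions, not taken from the PR:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Example value for the new field: keep the number of matching pods
	// per availability zone within a skew of 1, and refuse to schedule
	// a pod that would violate this.
	constraints := []v1.TopologySpreadConstraint{
		{
			MaxSkew:           1,
			TopologyKey:       "topology.kubernetes.io/zone",
			WhenUnsatisfiable: v1.DoNotSchedule,
			LabelSelector: &metav1.LabelSelector{
				// hypothetical label, for illustration only
				MatchLabels: map[string]string{"cluster-name": "acid-minimal-cluster"},
			},
		},
	}
	fmt.Printf("%+v\n", constraints)
}
```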
pkg/cluster/k8sres.go (outdated)
```go
	false,
	"",
	false,
	[]v1.TopologySpreadConstraint{},
```
At the moment we reuse the configured tolerations for the logical backup as well, so I guess we can do the same with the constraints.
Ok I got it.
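For illustration, a rough sketch of what reusing the cluster's configured constraints for the logical backup pod could look like, mirroring how tolerations are reused today. The helper name and parameters below are hypothetical (assumed to live in the same package as generateTopologySpreadConstraints), not the operator's actual functions:

```go
// Hypothetical sketch: wire the same constraints the statefulset pods get
// into the logical backup pod spec, just like the configured tolerations.
func logicalBackupPodSpec(tolerations []v1.Toleration, clusterLabels labels.Set,
	constraints []v1.TopologySpreadConstraint) v1.PodSpec {
	return v1.PodSpec{
		Tolerations:               tolerations,
		TopologySpreadConstraints: generateTopologySpreadConstraints(clusterLabels, constraints),
		// containers, volumes, etc. omitted for brevity
	}
}
```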
pkg/cluster/k8sres.go (outdated)
```go
	return podAntiAffinity
}

func generateTopologySpreadConstraints(labels labels.Set, topologySpreadConstraintObjs []v1.TopologySpreadConstraint) []v1.TopologySpreadConstraint {
```
would like to see a unit test for this function :)
I added a unit test for this function.
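A minimal sketch of what such a unit test might look like, assuming generateTopologySpreadConstraints copies the cluster labels into each configured constraint's label selector; the test actually added in the PR may check different behaviour:

```go
package cluster

import (
	"reflect"
	"testing"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func TestGenerateTopologySpreadConstraints(t *testing.T) {
	clusterLabels := labels.Set{"cluster-name": "acid-test-cluster"}
	configured := []v1.TopologySpreadConstraint{
		{
			MaxSkew:           1,
			TopologyKey:       "topology.kubernetes.io/zone",
			WhenUnsatisfiable: v1.DoNotSchedule,
		},
	}

	result := generateTopologySpreadConstraints(clusterLabels, configured)

	if len(result) != 1 {
		t.Fatalf("expected 1 constraint, got %d", len(result))
	}
	// assumption: the function sets the cluster labels as the label selector
	if result[0].LabelSelector == nil ||
		!reflect.DeepEqual(result[0].LabelSelector.MatchLabels, map[string]string(clusterLabels)) {
		t.Errorf("expected label selector with %v, got %v", clusterLabels, result[0].LabelSelector)
	}
}
```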
Can you also write an e2e test that checks that the constraints work as expected, please?
Dear @FxKu
Cannot start a pipeline. Click on the pipeline status check Details link below for more information.
Dear all,
I think we should add topologySpreadConstraints to the pod spec so that the pods can be spread across zones for high availability.
Could someone review it, please? Thank you very much.
Best regards.