Both the `Sandbox` and `SandboxClaim` custom resources remain `Ready` indefinitely after their pod is deleted by a manual node drain (`kubectl drain` on the node), even when the pod is rescheduled.

The bug is only observed when the pod comes from a `SandboxWarmPool`; it does not occur when the controller creates a new pod directly.
## Steps to reproduce

1. Create a `SandboxTemplate` and a `SandboxWarmPool` with `replicas: 1`.
2. Create a `SandboxClaim` and verify that it adopts the pod from the warm pool.
3. Get the adopted pod and node names:

   ```shell
   ADOPTED_POD_NAME=$(kubectl get sandbox <warmpool> -n default -o jsonpath='{.metadata.annotations.agents\.x-k8s\.io/pod-name}')
   NODE_TO_DRAIN=$(kubectl get pod "$ADOPTED_POD_NAME" -n default -o jsonpath='{.spec.nodeName}')
   ```

4. Drain the node:

   ```shell
   kubectl drain "$NODE_TO_DRAIN" --ignore-daemonsets --delete-emptydir-data --force
   ```
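After the drain completes, the stale status can be observed with something like the following. This is a sketch: the `sandbox`/`sandboxclaim` resource names and the `default` namespace are taken from the steps above, but the exact shorthand names accepted by `kubectl get` depend on how the CRDs define them.

```shell
# Inspect both resources after the pod has been evicted by the drain.
# Expected: their Ready status should eventually reflect the lost pod;
# observed: both continue to report Ready indefinitely.
kubectl get sandbox,sandboxclaim -n default

# The originally adopted pod no longer exists on the drained node.
kubectl get pod "$ADOPTED_POD_NAME" -n default
```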
## Environment

- GKE Autopilot cluster
- agent-sandbox controller v0.1.0, installed via `kubectl apply` from the repo