Bug: kubectl drain deletes pod but SandboxClaim status stays ready (WarmPool only) #275

@raaguln

Description

Issue

  • Both the Sandbox and SandboxClaim resources remain "Ready" indefinitely after their pod is deleted by a manual node drain (kubectl drain on the node), even after the pod is rescheduled.
  • The bug only occurs when the pod comes from a WarmPool, not when the controller creates the pod directly.

Steps to reproduce

  1. Create a SandboxTemplate and a SandboxWarmPool with replicas: 1
  2. Create a SandboxClaim and make sure it adopts the pod from the WarmPool
  3. Get the adopted pod and node names:
ADOPTED_POD_NAME=$(kubectl get sandbox <warmpool> -n default -o jsonpath='{.metadata.annotations.agents\.x-k8s\.io/pod-name}')
NODE_TO_DRAIN=$(kubectl get pod $ADOPTED_POD_NAME -n default -o jsonpath='{.spec.nodeName}')
  4. Drain the node and check the resulting state (see the commands below):
kubectl drain $NODE_TO_DRAIN --ignore-daemonsets --delete-emptydir-data --force
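
A sketch of the verification commands used after the drain (assumptions: sandboxclaim and sandbox are valid kubectl resource names for the CRD kinds above, and the status is printed raw rather than relying on specific printer columns):

# After the drain finishes, the adopted pod is gone (or has been recreated on another node)
kubectl get pod $ADOPTED_POD_NAME -n default

# Observed: both resources still report Ready even though the original pod was deleted
kubectl get sandboxclaim -n default
kubectl get sandbox <warmpool> -n default -o jsonpath='{.status}'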

Environment

  • GKE Autopilot cluster
  • agent-sandbox controller: installed via kubectl apply from the repo, v0.1.0
