Only the first init container image is replaced — the other init container and the main container are ignored even if pre-cached #522

@PaulHausmann

Description

Summary

When running a Pod with multiple init containers and one main container, Kube Image Keeper only rewrites the image of the first init container.
The images of the second init container and the main container are left untouched, even though both have been manually pre-cached (pre-heated) and are pullable from the cache.
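
For illustration, here is a sketch of the expected versus observed mutation, using the container names and images from the pod manifest below (the localhost:7439 prefix matches the proxy hostIp/hostPort configured in values.yaml):

# Expected: every image rewritten to go through the node-local proxy
initContainers:
  - name: dbchecker
    image: localhost:7439/gitlab-registry.x.y/shared-resources/cicd.cached-images/docker.io-busybox:1.32
  - name: theme-provider
    image: localhost:7439/gitlab-registry.x.y/software-engineering/customer-portal/customer-portal.keycloak-theme/app-image:2.1.3
containers:
  - name: keycloak
    image: localhost:7439/gitlab-registry.x.y/devops-sre/sre.shared-services.keycloak/keycloak-26.3.2:0.12.0

# Observed: only the first init container is rewritten
initContainers:
  - name: dbchecker
    image: localhost:7439/gitlab-registry.x.y/shared-resources/cicd.cached-images/docker.io-busybox:1.32
  - name: theme-provider
    image: gitlab-registry.x.y/software-engineering/customer-portal/customer-portal.keycloak-theme/app-image:2.1.3
containers:
  - name: keycloak
    image: gitlab-registry.x.y/devops-sre/sre.shared-services.keycloak/keycloak-26.3.2:0.12.0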

Environment

Kube Image Keeper version: 1.13.1
Kubernetes version: 1.29.0

values.yaml

## images used

# Default values for kube-image-keeper.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# -- Delay in days before deleting an unused CachedImage
cachedImagesExpiryDelay: 30
# -- List of architectures to put in cache
architectures: [amd64]
# -- Insecure registries to allow to cache and proxify images from
insecureRegistries:
  - gitlab-registry.x.y
  - https://gitlab-prod.x.y
# -- Root certificate authorities to trust
rootCertificateAuthorities: {}
  # secretName: some-secret
  # keys: []

controllers:
  # Maximum number of CachedImages that can be handled and reconciled at the same time (put or remove from cache)
  maxConcurrentCachedImageReconciles: 3
  # -- Number of controllers
  replicas: 4
  image:
    # -- Controller image repository. Also available: `quay.io/enix/kube-image-keeper`
    repository: gitlab-registry.x.y/shared-resources/cicd.cached-images/ghcr.io-enix-kube-image-keeper
    # -- Controller image pull policy
    pullPolicy: IfNotPresent
    # -- Controller image tag. Default chart appVersion
    tag: "1.13.1"
  # -- Controller logging verbosity
  verbosity: INFO
  # -- Specify secrets to be used when pulling controller image
  imagePullSecrets: []
  # -- Annotations to add to the controller pod
  podAnnotations: {}
  # -- Security context for the controller pod
  podSecurityContext: {}
  # -- Security context for containers of the controller pod
  securityContext: {}
  # -- Node selector for the controller pod
  nodeSelector: {}
  # -- Toleration for the controller pod
  tolerations: []
  # -- Set the PriorityClassName for the controller pod
  priorityClassName: ""
  pdb:
    # -- Create a PodDisruptionBudget for the controllers
    create: false
    # -- Minimum available pods
    minAvailable: 1
    # -- Maximum unavailable pods
    maxUnavailable: ""
  # -- Affinity for the controller pod
  affinity: {}
  # -- Extra env variables for the controllers pod
  env: []
  # -- Readiness probe definition for the controllers pod
  readinessProbe:
    httpGet:
      path: /readyz
      port: 8081
  # -- Liveness probe definition for the controllers pod
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8081
  resources:
    requests:
      # -- Cpu requests for the controller pod
      cpu: "50m"
      # -- Memory requests for the controller pod
      memory: "50Mi"
    limits:
      # -- Cpu limits for the controller pod
      cpu: "1"
      # -- Memory limits for the controller pod
      # memory: "512Mi"
  webhook:
    # -- Don't enable image caching for pods scheduled into these namespaces
    ignoredNamespaces: []
    # -- Don't enable image caching if the image match the following regexes
    ignoredImages: []
    # -- Enable image caching only if the image matches the following regexes (only applies when not empty)
    acceptedImages: []
    # -- Don't enable image caching if the image is configured with imagePullPolicy: Always
    ignorePullPolicyAlways: true
    # -- If true, create the issuer used to issue the webhook certificate
    createCertificateIssuer: true
    # -- Issuer reference to issue the webhook certificate, ignored if createCertificateIssuer is true
    certificateIssuerRef:
      kind: Issuer
      name: kube-image-keeper-selfsigned-issuer
    objectSelector:
      # -- Run the webhook if the object has matching labels. (See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#labelselectorrequirement-v1-meta)
      matchExpressions: []
  podMonitor:
    # -- Should a PodMonitor object be installed to scrape kuik controller metrics. For prometheus-operator (kube-prometheus) users.
    create: false
    # -- Target scrape interval set in the PodMonitor
    scrapeInterval: 60s
    # -- Target scrape timeout set in the PodMonitor
    scrapeTimeout: 30s
    # -- Additional labels to add to PodMonitor objects
    extraLabels: {}
    # -- Relabel config for the PodMonitor, see: https://coreos.com/operators/prometheus/docs/latest/api.html#relabelconfig
    relabelings: []

proxy:
  image:
    # -- Proxy image repository. Also available: `quay.io/enix/kube-image-keeper`
    repository: gitlab-registry.x.y/shared-resources/cicd.cached-images/ghcr.io-enix-kube-image-keeper
    # -- Proxy image pull policy
    pullPolicy: IfNotPresent
    # -- Proxy image tag. Default chart appVersion
    tag: "1.13.1"
  # -- whether to run the proxy daemonset in hostNetwork mode
  hostNetwork: false
  # -- hostPort used for the proxy pod
  hostPort: 7439
  # -- hostIp used for the proxy pod
  hostIp: "127.0.0.1"
  # -- metricsPort used for the proxy pod (to expose prometheus metrics)
  metricsPort: 8080
  # -- Verbosity level for the proxy pod
  verbosity: 1
  # -- Specify secrets to be used when pulling proxy image
  imagePullSecrets: []
  # -- Annotations to add to the proxy pod
  podAnnotations: {}
  # -- Security context for the proxy pod
  podSecurityContext: {}
  # -- Security context for containers of the proxy pod
  securityContext: {}
  # -- Node selector for the proxy pod
  nodeSelector: {}
  # -- Toleration for the proxy pod
  tolerations:
    - effect: NoSchedule
      operator: Exists
    - key: CriticalAddonsOnly
      operator: Exists
    - effect: NoExecute
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/disk-pressure
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/pid-pressure
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/unschedulable
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/network-unavailable
      operator: Exists
  # -- Set the PriorityClassName for the proxy pod
  priorityClassName: system-node-critical
  # -- Affinity for the proxy pod
  affinity: {}
  # -- Extra env variables for the proxy pod
  env: []
  # -- Readiness probe definition for the proxy pod
  readinessProbe:
    httpGet:
      path: /readyz
      port: 7439
  # -- Liveness probe definition for the proxy pod
  livenessProbe:
    httpGet:
      path: /healthz
      port: 7439
  resources:
    requests:
      # -- Cpu requests for the proxy pod
      cpu: "50m"
      # -- Memory requests for the proxy pod
      memory: "50Mi"
    limits:
      # -- Cpu limits for the proxy pod
      cpu: "1"
      # -- Memory limits for the proxy pod
      memory: "512Mi"
  podMonitor:
    # -- Should a PodMonitor object be installed to scrape kuik proxy metrics. For prometheus-operator (kube-prometheus) users.
    create: false
    # -- Target scrape interval set in the PodMonitor
    scrapeInterval: 60s
    # -- Target scrape timeout set in the PodMonitor
    scrapeTimeout: 30s
    # -- Additional labels to add to PodMonitor objects
    extraLabels: {}
    # -- Relabel config for the PodMonitor, see: https://coreos.com/operators/prometheus/docs/latest/api.html#relabelconfig
    relabelings: []
  kubeApiRateLimits: {}
    # -- Try higher values if there's a lot of CRDs installed in the cluster and proxy start takes a long time because of throttling
    # qps: 5
    # burst: 10

registry:
  image:
    # -- Registry image repository
    repository: gitlab-prod.x.y/software-engineering/dependency_proxy/containers/registry
    # -- Registry image pull policy
    pullPolicy: IfNotPresent
    # -- Registry image tag
    tag: "3.0.0"
  # -- Number of replicas for the registry pod
  replicas: 4
  persistence:
    # -- If true, enable persistent storage (ignored when using minio or S3)
    enabled: false
  garbageCollection:
    # -- Garbage collector cron schedule. Use standard crontab format.
    schedule: "0 0 * * 0"
    # -- If true, delete untagged manifests. Default to false since there is a known bug in **docker distribution** garbage collect job.
    deleteUntagged: false
    # -- Security context for the garbage collector pod
    podSecurityContext: {}
    # -- Security context for containers of the garbage collector pod
    securityContext: {}
    # -- Specify secrets to be used when pulling garbage-collector image
    imagePullSecrets:
      - name: "dependencyproxy"
    # -- Specify a nodeSelector for the garbage collector pod
    nodeSelector: {}
    # -- Affinity for the garbage collector pod
    affinity: {}
    # -- Toleration for the garbage collector pod
    tolerations: []
    # -- Deadline for the whole job
    activeDeadlineSeconds: 600
    # -- Resources settings for the garbage collector pod
    resources:
      requests:
        # -- Cpu requests for the garbage collector pod
        cpu: "10m"
        # -- Memory requests for the garbage collector pod
        memory: "10Mi"
      limits:
        # -- Cpu limits for the garbage collector pod
        cpu: "1"
        # -- Memory limits for the garbage collector pod
        memory: "512Mi"
    image:
      # -- Cronjob image repository
      repository: gitlab-prod.x.y/software-engineering/dependency_proxy/containers/bitnami/kubectl
      # -- Cronjob image pull policy
      pullPolicy: IfNotPresent
      # -- Cronjob image tag. Default 'latest'
      tag: "1.33.1"
  service:
    # -- Registry service type
    type: ClusterIP
  # -- A secret used to sign state that may be stored with the client to protect against tampering, generated if empty (see https://github.com/distribution/distribution/blob/main/docs/configuration.md#http)
  httpSecret: ""
  # -- Extra env variables for the registry pod
  env:
    - name: REGISTRY_STORAGE_S3_BUCKET
      value: registry                         # default bucket name
    - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
      value: http://kube-image-keeper-minio:9000
    - name: REGISTRY_STORAGE_S3_SECURE        # disable TLS inside the cluster
      value: "false"
    - name: REGISTRY_STORAGE_S3_FORCEPATHSTYLE
      value: "true"
    - name: REGISTRY_LOG_LEVEL
      value: info
    - name: OTEL_TRACES_EXPORTER
      value: none
  readinessProbe:
    httpGet:
      path: /v2/
      port: 5000
  # -- Liveness probe definition for the registry pod
  livenessProbe:
    httpGet:
      path: /v2/
      port: 5000
  resources:
    requests:
      # -- Cpu requests for the registry pod
      cpu: "50m"
      # -- Memory requests for the registry pod
      memory: "256Mi"
    limits:
      # -- Cpu limits for the registry pod
      cpu: "1"
      # -- Memory limits for the registry pod
      memory: "1Gi"
  # -- Specify secrets to be used when pulling registry image
  imagePullSecrets:
    - name: "dependencyproxy"
  # -- Annotations to add to the registry pod
  podAnnotations: {}
  # -- Security context for the registry pod
  podSecurityContext: {}
  # -- Security context for containers of the registry pod
  securityContext: {}
  # -- Node selector for the registry pod
  nodeSelector: {}
  # -- Toleration for the registry pod
  tolerations: []
  # -- Set the PriorityClassName for the registry pod
  priorityClassName: ""
  # -- Affinity for the registry pod
  affinity: {}
  pdb:
    # -- Create a PodDisruptionBudget for the registry
    create: false
    # -- Minimum available pods
    minAvailable: 1
    # -- Maximum unavailable pods
    maxUnavailable: ""
  serviceMonitor:
    # -- Should a ServiceMonitor object be installed to scrape kuik registry metrics. For prometheus-operator (kube-prometheus) users.
    create: false
    # -- Target scrape interval set in the ServiceMonitor
    scrapeInterval: 60s
    # -- Target scrape timeout set in the ServiceMonitor
    scrapeTimeout: 30s
    # -- Additional labels to add to ServiceMonitor objects
    extraLabels: {}
    # -- Relabel config for the ServiceMonitor, see: https://coreos.com/operators/prometheus/docs/latest/api.html#relabelconfig
    relabelings: []
  serviceAccount:
    # -- Create the registry serviceAccount. If false, use serviceAccount with specified name (or "default" if false and name unset.)
    create: true
    # -- Name of the registry serviceAccount (auto-generated if unset and create is true)
    name: ""
    # -- Annotations to add to the registry serviceAccount
    annotations: {}
    # -- Additional labels to add to the registry serviceAccount
    extraLabels: {}

docker-registry-ui:
  # -- If true, enable the registry user interface
  enabled: false
  ui:
    proxy: true
    dockerRegistryUrl: http://kube-image-keeper-registry:5000

minio:
  # -- If true, install minio as a local storage backend for the registry
  enabled: true
  fullnameOverride: "kube-image-keeper-minio"
  mode: distributed
  provisioning:
    enabled: true
    buckets:
      - name: registry
    usersExistingSecrets:
      - kube-image-keeper-minio-registry-users
    extraVolumes:
      - name: registry-keys
        secret:
          defaultMode: 420
          secretName: kube-image-keeper-s3-registry-keys
    extraVolumeMounts:
      - name: registry-keys
        mountPath: /opt/bitnami/minio/svcacct/registry/
    extraCommands:
      - |
        (mc admin user svcacct info provisioning $(cat /opt/bitnami/minio/svcacct/registry/accessKey) || mc admin user svcacct add provisioning registry --access-key "$(cat /opt/bitnami/minio/svcacct/registry/accessKey)" --secret-key "$(cat /opt/bitnami/minio/svcacct/registry/secretKey)")
  metrics:
    enabled: true
    prometheusAuthType: public
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: "topology.x.y/zone"
      whenUnsatisfiable: "DoNotSchedule"
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: "kube-image-keeper"
          app.kubernetes.io/instance: "image-keeper"
          app.kubernetes.io/managed-by: "Helm"
      matchLabelKeys: ["pod-template-hash"]



# (mc admin user svcacct info provisioning $(cat /opt/bitnami/minio/svcacct/registry/accessKey) 2> /dev/null ||
# mc admin user svcacct add
# --access-key "$(cat /opt/bitnami/minio/svcacct/registry/accessKey)"
# --secret-key "$(cat /opt/bitnami/minio/svcacct/registry/secretKey)"
# provisioning registry) > /dev/null

rbac:
  # -- Create the ClusterRole and ClusterRoleBinding. If false, need to associate permissions with serviceAccount outside this Helm chart.
  create: true

serviceAccount:
  # -- Create the serviceAccount. If false, use serviceAccount with specified name (or "default" if false and name unset.)
  create: true
  # -- Name of the serviceAccount (auto-generated if unset and create is true)
  name: "kuik"
  # -- Annotations to add to the serviceAccount
  annotations: {}
  # -- Additional labels to add to the serviceAccount
  extraLabels: {}

psp:
  # -- If True, create the PodSecurityPolicy
  create: false

Pod manifest

**Pod (kubectl describe)**

  Name:             keycloak-keycloakx-0
  Namespace:        keycloak-dev
  Priority:         0
  Service Account:  keycloak-keycloakx
  Node:             shared-infra-dev-worker-k92nk-6dx5m-2cnc2/10.70.104.15
  Start Time:       Mon, 11 Aug 2025 09:37:41 +0200
  Labels:           app.kubernetes.io/instance=keycloak
                    app.kubernetes.io/name=keycloakx
                    apps.kubernetes.io/pod-index=0
                    controller-revision-hash=keycloak-keycloakx-5ff998ffcc
                    kuik.enix.io/managed=true
                    statefulset.kubernetes.io/pod-name=keycloak-keycloakx-0
  Annotations:      checksum/config-startup: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
                    checksum/secrets: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
                    kubectl.kubernetes.io/restartedAt: 2025-08-11T07:42:34+02:00
                    kuik.enix.io/rewrite-images: true
                    original-init-image-dbchecker: gitlab-registry.x.y/shared-resources/cicd.cached-images/docker.io-busybox:1.32
                    rollme: "bCPvx7r1"
  Status:           Running
  IP:               x.x.x.x
  IPs:
    IP:           x.x.x.x
  Controlled By:  StatefulSet/keycloak-keycloakx
  Init Containers:
    dbchecker:
      Container ID:  containerd://d065aa6566cfa4af193cafa16e1cd506269a5496634f84bc5eb8f17bf7d2fede
      Image:         localhost:7439/gitlab-registry.x.y/shared-resources/cicd.cached-images/docker.io-busybox:1.32
      Image ID:      localhost:7439/gitlab-registry.x.y/shared-resources/cicd.cached-images/docker.io-busybox@sha256:1ccc0a0ca577e5fb5a0bdf2150a1a9f842f47c8865e861fa0062c5d343eb8cac
      Port:          <none>
      Host Port:     <none>
      Command:
        sh
        -c
        echo 'Waiting for Database to become ready...'
        
        until printf "." && nc -z -w 2 postgres-dev.x.y 5432; do
            sleep 2;
        done;
        
        echo 'Database OK ✓'
        
      State:          Terminated
        Reason:       Completed
        Exit Code:    0
        Started:      Mon, 11 Aug 2025 09:37:42 +0200
        Finished:     Mon, 11 Aug 2025 09:37:42 +0200
      Ready:          True
      Restart Count:  0
      Limits:
        cpu:     20m
        memory:  128Mi
      Requests:
        cpu:        20m
        memory:     32Mi
      Environment:  <none>
      Mounts:
        /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cdcpm (ro)
    theme-provider:
      Container ID:  containerd://3b84549d1fba0003ff41c41385854c9fa9548289cd2abb3ea436350b435b8846
      Image:         gitlab-registry.x.y/software-engineering/customer-portal/customer-portal.keycloak-theme/app-image:2.1.3
      Image ID:      gitlab-registry.x.y/software-engineering/customer-portal/customer-portal.keycloak-theme/app-image@sha256:6c2aabc28e751a736a523593e0ae9bd9bd897990a61e0458b7ac20a19cb9d989
      Port:          <none>
      Host Port:     <none>
      Command:
        /theme-installer.sh
      State:          Terminated
        Reason:       Completed
        Exit Code:    0
        Started:      Mon, 11 Aug 2025 09:37:42 +0200
        Finished:     Mon, 11 Aug 2025 09:37:42 +0200
      Ready:          True
      Restart Count:  0
      Environment:    <none>
      Mounts:
        /tmp/cupo/ from theme (rw)
        /tmp/rescue/ from rescue (rw)
        /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cdcpm (ro)
  Containers:
    keycloak:
      Container ID:  containerd://cec2463f961e0186604ab973dda98c8efdc9cd33f103e630ebe77c896d056d9f
      Image:         gitlab-registry.x.y/devops-sre/sre.shared-services.keycloak/keycloak-26.3.2:0.12.0
      Image ID:      gitlab-registry.x.y/devops-sre/sre.shared-services.keycloak/keycloak-26.3.2@sha256:6886c8bd53bd7fa05a6807eadec73add6e8e6d401821e132bc3ea5602b00219f
      Ports:         8080/TCP, 9000/TCP, 8443/TCP
      Host Ports:    0/TCP, 0/TCP, 0/TCP
      Command:
        /bin/sh
        -c
        cp -R /opt/keycloak/tmp_providers/. /opt/keycloak/providers/ && \
        /opt/keycloak/bin/kc.sh build --http-relative-path=/authx --transaction-xa-enabled=true && \
        /opt/keycloak/bin/kc.sh start --cache=ispn --cache-stack=kubernetes --cache-config-file=/cache.xml \
          --proxy-headers=xforwarded  --http-enabled=true --hostname-strict=false  \
          --hostname=auth-devops-dev.x.y  --log-level=info --optimized \
          --spi-brute-force-protector-default-brute-force-detector-allow-concurrent-requests=true
        
      State:          Running
        Started:      Mon, 11 Aug 2025 09:37:44 +0200
      Ready:          True
      Restart Count:  0
      Limits:
        memory:  4Gi
      Requests:
        cpu:      500m
        memory:   1Gi
      Liveness:   http-get http://:http-internal/authx/health/live delay=0s timeout=5s period=10s #success=1 #failure=3
      Readiness:  http-get http://:http-internal/authx/health/ready delay=10s timeout=1s period=10s #success=1 #failure=3
      Startup:    http-get http://:http-internal/authx/health delay=15s timeout=1s period=5s #success=1 #failure=60
      Environment:
        KC_HTTP_RELATIVE_PATH:             /authx
        KC_CACHE:                          ispn
        KC_CACHE_STACK:                    kubernetes
        KC_PROXY_HEADERS:                  forwarded
        KC_HTTP_ENABLED:                   true
        KC_DB:                             postgres
        KC_DB_URL_HOST:                    postgres-dev.x.y
        KC_DB_URL_PORT:                    5432
        KC_DB_URL_DATABASE:                ops_keycloak_dev
        KC_METRICS_ENABLED:                true
        KC_HEALTH_ENABLED:                 true

        KEYCLOAK_INSTANCE:                 develop
        PROMETHEUS_GROUPING_KEY_INSTANCE:  develop
        URI_METRICS_ENABLED:               true
      Mounts:
        /opt/keycloak/conf/cache.xml from ispn (ro,path="cache.xml")
        /opt/keycloak/themes/rescue from rescue (rw,path="rescue/")
        /opt/keycloak/tmp_providers from theme (rw)
        /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cdcpm (ro)
  Conditions:
    Type                        Status
    PodReadyToStartContainers   True 
    Initialized                 True 
    Ready                       True 
    ContainersReady             True 
    PodScheduled                True 
  Volumes:
    theme:
      Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
      Medium:     
      SizeLimit:  <unset>
    rescue:
      Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
      Medium:     
      SizeLimit:  <unset>
    kube-api-access-cdcpm:
      Type:                    Projected (a volume that contains injected data from multiple sources)
      TokenExpirationSeconds:  3607
      ConfigMapName:           kube-root-ca.crt
      ConfigMapOptional:       <nil>
      DownwardAPI:             true
  QoS Class:                   Burstable
  Node-Selectors:              <none>
  Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                               node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
  Events:                      <none>
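
If the mutation had applied to all three containers, the pod would presumably also carry original-image annotations for theme-provider and keycloak, following the pattern of the observed original-init-image-dbchecker key (the exact key names for the other two containers are an assumption here):

annotations:
  # observed
  original-init-image-dbchecker: gitlab-registry.x.y/shared-resources/cicd.cached-images/docker.io-busybox:1.32
  # expected but missing (assumed key names following the same pattern)
  original-init-image-theme-provider: gitlab-registry.x.y/software-engineering/customer-portal/customer-portal.keycloak-theme/app-image:2.1.3
  original-image-keycloak: gitlab-registry.x.y/devops-sre/sre.shared-services.keycloak/keycloak-26.3.2:0.12.0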

Repository manifests

Private container registry, manually created (kubectl describe):

Name:         gitlab-registry.x.y-devops-sre-sre.shared-services.keycloak-keycloak-26.3.2
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  kuik.enix.io/v1alpha1
Kind:         Repository
Metadata:
  Creation Timestamp:  2025-08-08T12:22:22Z
  Finalizers:
    repository.kuik.enix.io/finalizer
  Generation:        2
  Resource Version:  89630595
  UID:               4524dff9-07e9-4bec-875e-fce695d91473
Spec:
  Name:  gitlab-registry.x.y/devops-sre/sre.shared-services.keycloak/keycloak-26.3.2
  Pull Secret Names:
    gitlab-registry-sre
    gitlab-registry-se
  Pull Secrets Namespace:  kuik-system
Status:
  Conditions:
    Last Transition Time:  2025-08-08T19:04:45Z
    Message:               All images have been cached
    Reason:                UpToDate
    Status:                True
    Type:                  Ready
  Images:                  1
  Phase:                   Ready
Events:                    <none>
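
For reference, a minimal manifest equivalent to this manually created Repository, reconstructed from the spec above (a sketch; the field values are copied verbatim from the describe output):

apiVersion: kuik.enix.io/v1alpha1
kind: Repository
metadata:
  name: gitlab-registry.x.y-devops-sre-sre.shared-services.keycloak-keycloak-26.3.2
spec:
  name: gitlab-registry.x.y/devops-sre/sre.shared-services.keycloak/keycloak-26.3.2
  pullSecretNames:
    - gitlab-registry-sre
    - gitlab-registry-se
  pullSecretsNamespace: kuik-system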

Public container registry, automatically created (kubectl get -o yaml):

apiVersion: kuik.enix.io/v1alpha1
kind: Repository
metadata:
  creationTimestamp: "2025-07-28T07:37:53Z"
  finalizers:
  - repository.kuik.enix.io/finalizer
  generation: 1
  name: gitlab-registry.x.y-shared-resources-cicd.cached-images-docker.io-busybox
  resourceVersion: "89397092"
  uid: 0deeeb33-b141-4135-921e-95192bf398e7
spec:
  name: gitlab-registry.x.y/shared-resources/cicd.cached-images/docker.io-busybox
  pullSecretNames:
  - gitlab-registry-se
  - gitlab-devops
  pullSecretsNamespace: keycloak-dev
status:
  conditions:
  - lastTransitionTime: "2025-08-08T09:03:56Z"
    message: All images have been cached
    reason: UpToDate
    status: "True"
    type: Ready
  images: 1
  phase: Ready

Private container registry, manually created (kubectl describe):

Name:         gitlab-registry.x.y-software-engineering-customer-portal-customer-portal.keycloak-theme-app-image
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  kuik.enix.io/v1alpha1
Kind:         Repository
Metadata:
  Creation Timestamp:  2025-08-08T12:22:23Z
  Finalizers:
    repository.kuik.enix.io/finalizer
  Generation:        2
  Resource Version:  89630593
  UID:               7ce0cae3-223d-4dda-bbcc-a8f1cd9b6542
Spec:
  Name:  gitlab-registry.x.y/software-engineering/customer-portal/customer-portal.keycloak-theme/app-image
  Pull Secret Names:
    gitlab-registry-sre
    gitlab-registry-se
  Pull Secrets Namespace:  kuik-system
Status:
  Conditions:
    Last Transition Time:  2025-08-08T19:04:45Z
    Message:               All images have been cached
    Reason:                UpToDate
    Status:                True
    Type:                  Ready
  Images:                  1
  Phase:                   Ready
Events:                    <none>

CachedImage manifests

Main container image, manually pre-cached (kubectl get -o yaml), private container registry:

apiVersion: kuik.enix.io/v1alpha1
kind: CachedImage
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kuik.enix.io/v1alpha1","kind":"CachedImage","metadata":{"annotations":{},"name":"gitlab-registry.x.y-devops-sre-sre.shared-services.keycloak-keycloak-26.3.2-0.12.0"},"spec":{"sourceImage":"gitlab-registry.x.y/devops-sre/sre.shared-services.keycloak/keycloak-26.3.2:0.12.0"}}
  creationTimestamp: "2025-08-08T12:22:22Z"
  finalizers:
  - cachedimage.kuik.enix.io/finalizer
  generation: 2
  labels:
    kuik.enix.io/repository: f0da2feffdb827c0897d749a8123167cff7a0cd56cb3420003bf5f98
  name: gitlab-registry.x.y-devops-sre-sre.shared-services.keycloak-keycloak-26.3.2-0.12.0
  ownerReferences:
  - apiVersion: kuik.enix.io/v1alpha1
    kind: Repository
    name: gitlab-registry.x.y-devops-sre-sre.shared-services.keycloak-keycloak-26.3.2
    uid: 4524dff9-07e9-4bec-875e-fce695d91473
  resourceVersion: "89477299"
  uid: 1b1c0aee-7dea-457e-9dae-0fedcbb684f3
spec:
  expiresAt: "2025-09-07T12:22:22Z"
  sourceImage: gitlab-registry.x.y/devops-sre/sre.shared-services.keycloak/keycloak-26.3.2:0.12.0
status:
  availableUpstream: true
  digest: 6886c8bd53bd7fa05a6807eadec73add6e8e6d401821e132bc3ea5602b00219f
  isCached: true
  lastSeenUpstream: "2025-08-08T12:23:53Z"
  lastSuccessfulPull: "2025-08-08T12:23:58Z"
  lastSync: "2025-08-08T12:23:58Z"
  phase: Ready
  upToDate: true
  upstreamDigest: 6886c8bd53bd7fa05a6807eadec73add6e8e6d401821e132bc3ea5602b00219f
  usedBy: {}
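
The manifest that was applied to pre-cache this image, reconstructed from the last-applied-configuration annotation above:

apiVersion: kuik.enix.io/v1alpha1
kind: CachedImage
metadata:
  name: gitlab-registry.x.y-devops-sre-sre.shared-services.keycloak-keycloak-26.3.2-0.12.0
spec:
  sourceImage: gitlab-registry.x.y/devops-sre/sre.shared-services.keycloak/keycloak-26.3.2:0.12.0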

Init container 1 (dbchecker) image, automatically pre-cached (kubectl describe), public container registry:

Namespace:    
Labels:       kuik.enix.io/repository=20bb55d2561f865fa8146986d85767a34b05160d436c730a26d7f36e
Annotations:  <none>
API Version:  kuik.enix.io/v1alpha1
Kind:         CachedImage
Metadata:
  Creation Timestamp:  2025-07-28T07:37:53Z
  Finalizers:
    cachedimage.kuik.enix.io/finalizer
  Generation:  5
  Owner References:
    API Version:     kuik.enix.io/v1alpha1
    Kind:            Repository
    Name:            gitlab-registry.x.y-shared-resources-cicd.cached-images-docker.io-busybox
    UID:             0deeeb33-b141-4135-921e-95192bf398e7
  Resource Version:  91019890
  UID:               8b5c1ee2-b4dd-4187-bc7d-dec8a53a25f4
Spec:
  Source Image:  gitlab-registry.x.y/shared-resources/cicd.cached-images/docker.io-busybox:1.32
Status:
  Available Upstream:    true
  Digest:                1ccc0a0ca577e5fb5a0bdf2150a1a9f842f47c8865e861fa0062c5d343eb8cac
  Is Cached:             true
  Last Seen Upstream:    2025-08-07T13:03:37Z
  Last Successful Pull:  2025-08-07T13:03:38Z
  Last Sync:             2025-08-07T13:03:38Z
  Phase:                 Ready
  Up To Date:            true
  Upstream Digest:       1ccc0a0ca577e5fb5a0bdf2150a1a9f842f47c8865e861fa0062c5d343eb8cac
  Used By:
    Count:  3
    Pods:
      Namespaced Name:  keycloak-dev/keycloak-keycloakx-0
      Namespaced Name:  keycloak-dev/keycloak-keycloakx-1
      Namespaced Name:  keycloak-dev/keycloak-keycloakx-2
Events:                 <none>

Init container 2 (theme-provider) image, manually pre-cached (kubectl describe), private container registry. Note that Used By below is empty, consistent with this image never being pulled through the kuik proxy:

Namespace:    
Labels:       kuik.enix.io/repository=138ef1e9271d694c8a4eb4ab4adcd25ad22b9495f4d765a4f8a8918a
Annotations:  <none>
API Version:  kuik.enix.io/v1alpha1
Kind:         CachedImage
Metadata:
  Creation Timestamp:  2025-08-08T12:23:40Z
  Finalizers:
    cachedimage.kuik.enix.io/finalizer
  Generation:  2
  Owner References:
    API Version:     kuik.enix.io/v1alpha1
    Kind:            Repository
    Name:            gitlab-registry.x.y-software-engineering-customer-portal-customer-portal.keycloak-theme-app-image
    UID:             7ce0cae3-223d-4dda-bbcc-a8f1cd9b6542
  Resource Version:  89477173
  UID:               d48ef846-2d0b-421e-b790-9251a6d3acf1
Spec:
  Expires At:    2025-09-07T12:23:40Z
  Source Image:  gitlab-registry.x.y/software-engineering/customer-portal/customer-portal.keycloak-theme/app-image:2.1.3
Status:
  Available Upstream:    true
  Digest:                6c2aabc28e751a736a523593e0ae9bd9bd897990a61e0458b7ac20a19cb9d989
  Is Cached:             true
  Last Seen Upstream:    2025-08-08T12:23:40Z
  Last Successful Pull:  2025-08-08T12:23:41Z
  Last Sync:             2025-08-08T12:23:41Z
  Phase:                 Ready
  Up To Date:            true
  Upstream Digest:       6c2aabc28e751a736a523593e0ae9bd9bd897990a61e0458b7ac20a19cb9d989
  Used By:
Events:  <none>
