helm/v2-alpha: Helm plugin fails to template deployment fields when container name is not "manager" #5449

@AlirezaPourchali

Description

What broke? What's expected?

The helm/v2-alpha plugin hardcodes checks for name: manager in 5 different templating functions, causing it to silently skip templating when users customize the container name. This affects critical fields like image references, resources, environment variables, security contexts, and container arguments.

Expected behavior:
When running kubebuilder edit --plugins=helm/v2-alpha, the Helm chart should properly template deployment fields regardless of the container name.

Actual behavior:
The Helm chart generation silently skips templating if the container name is anything other than manager, resulting in hardcoded values instead of Helm template expressions.

Affected functions in helm_templater.go (the guard's substring behavior is demonstrated just after this list):

Line 533: templateEnvironmentVariables() - checks: if !strings.Contains(yamlContent, "name: manager")
Line 586: templateResources() - checks: if !strings.Contains(yamlContent, "name: manager")
Line 773: templateContainerSecurityContext() - checks: if !strings.Contains(yamlContent, "name: manager")
Line 839: templateControllerManagerArgs() - checks: if !strings.Contains(yamlContent, "name: manager")
Line 940: templateImageReference() - checks: if !strings.Contains(yamlContent, "name: manager")
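
The guard is a plain substring test, which cuts both ways: it skips any custom name that does not contain the literal prefix, yet would spuriously pass for a name like manager-sidecar. A minimal, self-contained demonstration of that behavior (the container names are made up for illustration):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Each case stands in for the yamlContent the templater inspects.
	for _, name := range []string{"manager", "osiris-manager", "manager-sidecar"} {
		yamlContent := "name: " + name
		fmt.Printf("%-16s guard passes: %v\n", name, strings.Contains(yamlContent, "name: manager"))
	}
	// Output:
	// manager          guard passes: true
	// osiris-manager   guard passes: false
	// manager-sidecar  guard passes: true
}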

Example of the problem:

With name: manager:

image: "{{ .Values.manager.image.repository }}:{{ .Values.manager.image.tag }}"
imagePullPolicy: {{ .Values.manager.image.pullPolicy }}

With name: osiris-manager:

image: docker.io/server/osiris:1.0.5  # ❌ Hardcoded!
# imagePullPolicy is missing entirely
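
One possible direction for a fix is to anchor on the first container entry rather than on a specific name. A rough sketch of that idea (the helper and regex are hypothetical, not the plugin's actual code):

package main

import (
	"fmt"
	"regexp"
)

// firstContainerName extracts the first "- name: <x>" entry from a containers
// block. The regex is deliberately naive and would also match other "- name:"
// list items (env, volumes), so a real fix would parse the YAML instead.
var containerNameRe = regexp.MustCompile(`(?m)^\s*-\s*name:\s*(\S+)`)

func firstContainerName(yamlContent string) (string, bool) {
	m := containerNameRe.FindStringSubmatch(yamlContent)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	manifest := `containers:
- name: osiris-manager
  image: docker.io/server/osiris:1.0.5`
	if name, ok := firstContainerName(manifest); ok {
		fmt.Printf("would template container %q\n", name) // osiris-manager
	}
}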

Why this matters:

  1. Users should be able to customize container names to match their project conventions
  2. The scaffolded YAML includes the kubectl.kubernetes.io/default-container: manager annotation specifically to support custom container names (see the annotation-lookup sketch after this list)
  3. No warning or error is shown to users when templating is skipped
  4. This defeats the entire purpose of generating a Helm chart
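
Since the scaffolded manifest already carries the default-container annotation, the templater could resolve the target container from it instead of assuming a name. A minimal sketch of that lookup, assuming gopkg.in/yaml.v3 and hypothetical type/function names throughout:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// deployment models just enough of a Deployment to reach the pod template
// annotations. Illustrative sketch, not the plugin's actual code.
type deployment struct {
	Spec struct {
		Template struct {
			Metadata struct {
				Annotations map[string]string `yaml:"annotations"`
			} `yaml:"metadata"`
		} `yaml:"template"`
	} `yaml:"spec"`
}

// defaultContainer returns the container name recorded by the scaffolded
// kubectl.kubernetes.io/default-container annotation, if present.
func defaultContainer(manifest []byte) (string, bool) {
	var d deployment
	if err := yaml.Unmarshal(manifest, &d); err != nil {
		return "", false
	}
	name, ok := d.Spec.Template.Metadata.Annotations["kubectl.kubernetes.io/default-container"]
	return name, ok
}

func main() {
	manifest := []byte(`spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: osiris-manager`)
	if name, ok := defaultContainer(manifest); ok {
		fmt.Printf("templating should target container %q\n", name)
	}
}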

Reproducing this issue

  1. Initialize a new kubebuilder project:

kubebuilder init --domain example.com --repo example.com/myproject

  2. Edit manager.yaml and change the container name:

containers:
- name: myproject-manager  # Changed from "manager"
  image: controller:latest
  # ... rest of config

  3. Generate the Helm chart:

export IMG=myregistry.com/myproject:v1.0.0
kubebuilder edit --plugins=helm/v2-alpha --force

  4. Check the generated chart:

cat dist/chart/templates/manager/manager.yaml | grep "image:"

Expected result:

image: "{{ .Values.manager.image.repository }}:{{ .Values.manager.image.tag }}"

Actual result:

image: myregistry.com/myproject:v1.0.0
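
For a scriptable version of the same check, something like the following standalone helper works (a sketch; it assumes the default output: dist chart layout shown under Plugin versions below):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Adjust the path if the plugin's output directory was configured differently.
	data, err := os.ReadFile("dist/chart/templates/manager/manager.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read chart:", err)
		os.Exit(1)
	}
	if strings.Contains(string(data), "{{ .Values.manager.image.repository }}") {
		fmt.Println("image is templated (expected)")
	} else {
		fmt.Println("image is hardcoded (bug reproduced)")
	}
}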
Full diff showing the problem

# Expected (with name: manager)
containers:
- name: manager
  image: "{{ .Values.manager.image.repository }}:{{ .Values.manager.image.tag }}"
  imagePullPolicy: {{ .Values.manager.image.pullPolicy }}
  resources:
    {{- if .Values.manager.resources }}
    {{- toYaml .Values.manager.resources | nindent 4 }}
    {{- else }}
    {}
    {{- end }}

# Actual (with name: custom-manager)
containers:
- name: custom-manager
  image: myregistry.com/myproject:v1.0.0  # ❌ Hardcoded
  resources:                               # ❌ Not templated
    limits:
      cpu: 500m
      memory: 128Mi
    requests:
      cpu: 10m
      memory: 64Mi

KubeBuilder (CLI) Version

4.9.0; also reproduced on master (commit 17a9dda4e)

PROJECT version

3

Plugin versions

layout:
- go.kubebuilder.io/v4
plugins:
  helm.kubebuilder.io/v2-alpha:
    manifests: dist/install.yaml
    output: dist

Other versions

  • Go: 1.23+
  • Kubernetes: v1.31+
  • The bug is in the plugin code itself, so it does not depend on runtime versions

Extra Labels

No response

Labels

kind/bug