Description
What broke? What's expected?
The helm/v2-alpha plugin hardcodes a check for name: manager in five templating functions, silently skipping templating when users customize the container name. This affects critical fields: image references, resources, environment variables, security contexts, and container arguments.
Expected behavior:
When running kubebuilder edit --plugins=helm/v2-alpha, the Helm chart should properly template deployment fields regardless of the container name.
Actual behavior:
The Helm chart generation silently skips templating if the container name is anything other than manager, resulting in hardcoded values instead of Helm template expressions.
Affected functions in helm_templater.go:
Line 533: templateEnvironmentVariables() - checks: if !strings.Contains(yamlContent, "name: manager")
Line 586: templateResources() - checks: if !strings.Contains(yamlContent, "name: manager")
Line 773: templateContainerSecurityContext() - checks: if !strings.Contains(yamlContent, "name: manager")
Line 839: templateControllerManagerArgs() - checks: if !strings.Contains(yamlContent, "name: manager")
Line 940: templateImageReference() - checks: if !strings.Contains(yamlContent, "name: manager")
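The pattern, reduced to its essence, looks like the sketch below. This is a simplified stand-in, not the actual function body: the guard is quoted verbatim from helm_templater.go, while the replacement logic is illustrative only.

package main

import (
	"fmt"
	"strings"
)

// Simplified stand-in for any of the five affected functions. The guard is
// the actual check from helm_templater.go; the replacement is illustrative.
func templateImageReference(yamlContent string) string {
	if !strings.Contains(yamlContent, "name: manager") {
		return yamlContent // bug: silently returns the input untouched
	}
	return strings.Replace(yamlContent,
		"image: controller:latest",
		`image: "{{ .Values.manager.image.repository }}:{{ .Values.manager.image.tag }}"`,
		1)
}

func main() {
	custom := "- name: osiris-manager\n  image: controller:latest\n"
	fmt.Print(templateImageReference(custom)) // image stays hardcoded, no warning
}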
Example of the problem:
With name: manager:
image: "{{ .Values.manager.image.repository }}:{{ .Values.manager.image.tag }}"
imagePullPolicy: {{ .Values.manager.image.pullPolicy }}
With name: osiris-manager:
image: docker.io/server/osiris:1.0.5 # ❌ Hardcoded!
# imagePullPolicy is missing entirely
Why this matters:
- Users should be able to customize container names to match their project conventions
- The scaffolded YAML includes the kubectl.kubernetes.io/default-container: manager annotation specifically to support custom container names (see the sketch after this list)
- No warning or error is shown to users when templating is skipped
- This breaks the entire purpose of generating a Helm chart
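One possible direction, sketched here under assumptions (containerName is a hypothetical helper, not an existing function in the plugin, and the real templater may not receive the manifest in exactly this form): derive the name to match from the annotation the scaffold already writes, falling back to manager.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// containerName is a hypothetical helper: it reads the
// kubectl.kubernetes.io/default-container annotation the scaffold emits
// and falls back to "manager" when the annotation is absent.
func containerName(yamlContent string) string {
	re := regexp.MustCompile(`kubectl\.kubernetes\.io/default-container:\s*(\S+)`)
	if m := re.FindStringSubmatch(yamlContent); len(m) == 2 {
		return m[1]
	}
	return "manager"
}

func main() {
	manifest := "metadata:\n" +
		"  annotations:\n" +
		"    kubectl.kubernetes.io/default-container: osiris-manager\n" +
		"spec:\n" +
		"  containers:\n" +
		"  - name: osiris-manager\n"
	name := containerName(manifest)
	// Each of the five guards could then check the chosen name instead of
	// the literal string "name: manager":
	fmt.Println(strings.Contains(manifest, "name: "+name)) // true
}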
Reproducing this issue
- Initialize a new kubebuilder project:
kubebuilder init --domain example.com --repo example.com/myproject
- Edit manager.yaml and change the container name:
containers:
- name: myproject-manager # Changed from "manager"
image: controller:latest
# ... rest of config
- Generate Helm chart:
export IMG=myregistry.com/myproject:v1.0.0
kubebuilder edit --plugins=helm/v2-alpha --force
- Check the generated chart:
cat dist/chart/templates/manager/manager.yaml | grep "image:"
Expected result:
image: "{{ .Values.manager.image.repository }}:{{ .Values.manager.image.tag }}"
Actual result:
image: myregistry.com/myproject:v1.0.0
Full diff showing the problem
# Expected (with name: manager)
containers:
- name: manager
image: "{{ .Values.manager.image.repository }}:{{ .Values.manager.image.tag }}"
imagePullPolicy: {{ .Values.manager.image.pullPolicy }}
resources:
{{- if .Values.manager.resources }}
{{- toYaml .Values.manager.resources | nindent 4 }}
{{- else }}
{}
{{- end }}
# Actual (with name: custom-manager)
containers:
- name: custom-manager
image: myregistry.com/myproject:v1.0.0 # ❌ Hardcoded
resources: # ❌ Not templated
limits:
cpu: 500m
memory: 128Mi
requests:
cpu: 10m
memory: 64Mi
KubeBuilder (CLI) Version
4.9.0; also reproduced on master (commit 17a9dda4e)
PROJECT version
3
Plugin versions
layout:
- go.kubebuilder.io/v4
plugins:
helm.kubebuilder.io/v2-alpha:
manifests: dist/install.yaml
output: dist
Other versions
- Go: 1.23+
- Kubernetes: v1.31+
- This bug exists in the plugin code itself and does not depend on runtime versions
Extra Labels
No response