Merging changes as mentioned in SREP-3765#581
chamalabey wants to merge 10 commits into openshift:master from
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: chamalabey. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the

Use the following commands to manage reviews:
Walkthrough

Two Tekton PipelineRun manifests were changed to embed full inline pipelineSpec definitions.

Changes
Sequence Diagram(s)

sequenceDiagram
participant PR as PipelineRun
participant TC as Tekton Controller
participant GIT as Git Repo
participant BUILDER as Build Tasks
participant SCANNER as Scanners
participant REG as Image Registry
participant SIGN as Sign/Attest
PR->>TC: submit PipelineRun with inline pipelineSpec
TC->>GIT: clone repository (git-url, revision) -> populate workspace
TC->>BUILDER: run build tasks (build-container -> build-image-index -> build-source-image)
BUILDER->>REG: push image(s) -> return IMAGE_URL / IMAGE_DIGEST
TC->>SCANNER: run vulnerability/SBOM scans (conditional via params)
SCANNER-->>TC: produce SBOM/scan reports
TC->>SIGN: run signing/attestation tasks (conditional)
SIGN->>REG: attach signatures/attestations
TC->>PR: run finally tasks (show-sbom) and set PipelineRun results
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 4 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
…ission webhook pipelinerun-kueue-defaulter.tekton-kueue.io denied the request: expected exactly one, got both: pipelineRef, pipelineSpec'
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.tekton/managed-upgrade-operator-pull-request.yaml:
- Line 37: The PipelineRun spec currently includes both pipelineRef and
pipelineSpec which violates Tekton v1 (PipelineRun.spec must contain exactly
one); remove either the pipelineRef block or the pipelineSpec block so only one
remains, ensuring the remaining configuration is a valid PipelineRun.spec
(update any references to the removed symbol—pipelineRef or pipelineSpec—and
validate the final PipelineRun against Tekton schema).
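The mutual exclusivity described here is exactly what the admission webhook rejected ("expected exactly one, got both: pipelineRef, pipelineSpec"). A small local check could catch this before pushing; the helper below is a hypothetical sketch (not part of this repo) that operates on an already-parsed .spec mapping, so it needs no YAML dependency:

```python
# Hypothetical pre-push sanity check: a Tekton v1 PipelineRun .spec must set
# exactly one of pipelineRef / pipelineSpec -- never both, never neither.
def check_pipelinerun_spec(spec: dict) -> list[str]:
    """Return validation errors for a PipelineRun .spec mapping."""
    errors = []
    has_ref = "pipelineRef" in spec
    has_inline = "pipelineSpec" in spec
    if has_ref and has_inline:
        errors.append("expected exactly one, got both: pipelineRef, pipelineSpec")
    if not has_ref and not has_inline:
        errors.append("expected exactly one of: pipelineRef, pipelineSpec")
    return errors


if __name__ == "__main__":
    bad = {"pipelineRef": {"resolver": "bundles"}, "pipelineSpec": {"tasks": []}}
    good = {"pipelineSpec": {"tasks": []}}
    print(check_pipelinerun_spec(bad))   # mirrors the webhook's rejection
    print(check_pipelinerun_spec(good))  # → []
```

Parsing the manifest itself (for example with PyYAML) is left out to keep the sketch dependency-free.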
ℹ️ Review info
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
.tekton/managed-upgrade-operator-pull-request.yaml
.tekton/managed-upgrade-operator-push.yaml
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.tekton/managed-upgrade-operator-push.yaml:
- Around line 177-188: The prefetch-dependencies task can run even when
clone-repository was skipped because init.results.build is false; add a Tekton
when-gate to prevent execution unless init signaled a build. Update the
prefetch-dependencies task (name: prefetch-dependencies) to include a when block
that checks $(tasks.init.results.build) (e.g. when: - input:
"$(tasks.init.results.build)" operator: in values: ["true"]) so the task only
runs when init indicates a build and thus clone-repository ran.
📒 Files selected for processing (1)
.tekton/managed-upgrade-operator-push.yaml
- name: prefetch-dependencies
  params:
    - name: input
      value: $(params.prefetch-input)
    - name: SOURCE_ARTIFACT
      value: $(tasks.clone-repository.results.SOURCE_ARTIFACT)
    - name: ociStorage
      value: $(params.output-image).prefetch
    - name: ociArtifactExpiresAfter
      value: $(params.image-expires-after)
  runAfter:
    - clone-repository
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
FILE=".tekton/managed-upgrade-operator-push.yaml"
python3 - <<'PY'
import re, pathlib
p = pathlib.Path(".tekton/managed-upgrade-operator-push.yaml")
t = p.read_text()
m = re.search(r'\n\s+tasks:\n(.*)\n\s+workspaces:\n', t, re.S)
if not m:
raise SystemExit("Could not locate pipelineSpec.tasks/workspaces block")
tasks = re.split(r'\n\s+- name:\s+', m.group(1))[1:]
print("Tasks using clone-repository results and whether they are gated by init.build:")
for block in tasks:
name = block.splitlines()[0].strip()
uses_clone = '$(tasks.clone-repository.results.' in block
has_build_when = '$(tasks.init.results.build)' in block
if uses_clone:
print(f"- {name}: uses clone results={uses_clone}, when(init.build)={has_build_when}")
PY
echo
echo "Context around prefetch-dependencies:"
rg -n -C4 'name:\s+prefetch-dependencies|tasks\.clone-repository\.results|tasks\.init\.results\.build' "$FILE"

Repository: openshift/managed-upgrade-operator
Length of output: 3334
🏁 Script executed:
#!/bin/bash
FILE=".tekton/managed-upgrade-operator-push.yaml"
# Extract the complete prefetch-dependencies task
rg -n -A 30 'name:\s+prefetch-dependencies' "$FILE" | head -50

Repository: openshift/managed-upgrade-operator
Length of output: 1196
Add the missing build gate on prefetch-dependencies.
The prefetch-dependencies task at line 182 consumes $(tasks.clone-repository.results.SOURCE_ARTIFACT), but it lacks the when: - input: $(tasks.init.results.build) gate. Since clone-repository can be skipped when init.results.build is false, this creates a state where prefetch-dependencies could execute without its required dependency being available, breaking the run.
Proposed fix
- name: prefetch-dependencies
params:
- name: input
value: $(params.prefetch-input)
- name: SOURCE_ARTIFACT
value: $(tasks.clone-repository.results.SOURCE_ARTIFACT)
- name: ociStorage
value: $(params.output-image).prefetch
- name: ociArtifactExpiresAfter
value: $(params.image-expires-after)
runAfter:
- clone-repository
taskRef:
params:
- name: name
value: prefetch-dependencies-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-prefetch-dependencies-oci-ta:0.2@sha256:dc82a7270aace9b1c26f7e96f8ccab2752e53d32980c41a45e1733baad76cde6
- name: kind
value: task
resolver: bundles
+ when:
+ - input: $(tasks.init.results.build)
+ operator: in
+ values:
+ - "true"
workspaces:
- name: git-basic-auth
      workspace: git-auth

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.tekton/managed-upgrade-operator-push.yaml around lines 177 - 188, The
prefetch-dependencies task can run even when clone-repository was skipped
because init.results.build is false; add a Tekton when-gate to prevent execution
unless init signaled a build. Update the prefetch-dependencies task (name:
prefetch-dependencies) to include a when block that checks
$(tasks.init.results.build) (e.g. when: - input: "$(tasks.init.results.build)"
operator: in values: ["true"]) so the task only runs when init indicates a build
and thus clone-repository ran.
…latest cosigned image
🧹 Nitpick comments (1)
.tekton/managed-upgrade-operator-pull-request.yaml (1)
37-608: Consider adding a short sync marker for upstream provenance.

Since this is a large inlined pipeline, add a small annotation/comment indicating the upstream catalog source/revision used for this snapshot to reduce future drift during manual updates.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.tekton/managed-upgrade-operator-pull-request.yaml around lines 37 - 608, Add an explicit upstream provenance marker to this inlined Tekton pipeline by adding an annotation (e.g. metadata.annotations: konflux-ci/upstream: "quay.io/konflux-ci/tekton-catalog/pipeline-docker-build-oci-ta@sha256:<sha>" and/or a short YAML comment) near the top of the pipeline (look for pipelineSpec and its description block) that records the original catalog bundle name and digest/revision used to produce this snapshot; ensure the annotation key is stable and human-readable so future maintainers can trace the upstream source and revision easily.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.tekton/managed-upgrade-operator-pull-request.yaml:
- Around line 37-608: Add an explicit upstream provenance marker to this inlined
Tekton pipeline by adding an annotation (e.g. metadata.annotations:
konflux-ci/upstream:
"quay.io/konflux-ci/tekton-catalog/pipeline-docker-build-oci-ta@sha256:<sha>"
and/or a short YAML comment) near the top of the pipeline (look for pipelineSpec
and its description block) that records the original catalog bundle name and
digest/revision used to produce this snapshot; ensure the annotation key is
stable and human-readable so future maintainers can trace the upstream source
and revision easily.
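A minimal sketch of the provenance marker suggested above could look like the following; the annotation key and digest placeholder are illustrative assumptions, not an established Konflux convention:

```yaml
# Sketch only: annotation key and digest are hypothetical placeholders.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  annotations:
    # Upstream catalog bundle this inline pipelineSpec was snapshotted from:
    konflux-ci/upstream: "quay.io/konflux-ci/tekton-catalog/pipeline-docker-build-oci-ta@sha256:<sha>"
spec:
  pipelineSpec:
    # ...inlined pipeline copied from the bundle above...
```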
📒 Files selected for processing (1)
.tekton/managed-upgrade-operator-pull-request.yaml
🧹 Nitpick comments (1)
.tekton/managed-upgrade-operator-pull-request.yaml (1)
43-47: Add explicit build guards for tasks consuming build-image-index results.

Tasks in this range consume $(tasks.build-image-index.results.*) but usually only gate on skip-checks. Since build gating is driven by $(tasks.init.results.build) (e.g., Line 173), consider adding the same explicit when guard here (and in finally.show-sbom) to avoid implicit skip-by-missing-result behavior and make execution intent clearer.

♻️ Suggested pattern

  finally:
    - name: show-sbom
+     when:
+       - input: $(tasks.init.results.build)
+         operator: in
+         values:
+           - "true"
      params:
        - name: IMAGE_URL
          value: $(tasks.build-image-index.results.IMAGE_URL)
@@
    - name: deprecated-base-image-check
@@
      when:
+       - input: $(tasks.init.results.build)
+         operator: in
+         values:
+           - "true"
        - input: $(params.skip-checks)
          operator: in
          values:
            - "false"

Reference:
- https://tekton.dev/vault/pipelines-v0.56.x-lts/pipelines/ (guarded task behavior and result-dependent skips)
- https://tekton.dev/vault/Pipelines-v1.3.x-LTS/pipelines/ (result consumption / finally behavior)
Also applies to: 312-603
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.tekton/managed-upgrade-operator-pull-request.yaml around lines 43 - 47, Tasks that consume $(tasks.build-image-index.results.*) lack an explicit build guard and currently only gate on skip-checks; add a when condition mirroring the build gate used elsewhere (check $(tasks.init.results.build) == "true" or similar) to every task that reads build-image-index results and to the finally task show-sbom so they only run when the init build result indicates a build should run; locate task definitions referencing $(tasks.build-image-index.results.IMAGE_URL) and update them (and the finally: - name: show-sbom entry) to include the same when guard used at init (e.g., compare $(tasks.init.results.build) to "true") to make the execution intent explicit and avoid implicit skip-by-missing-result behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.tekton/managed-upgrade-operator-pull-request.yaml:
- Around line 43-47: Tasks that consume $(tasks.build-image-index.results.*)
lack an explicit build guard and currently only gate on skip-checks; add a when
condition mirroring the build gate used elsewhere (check
$(tasks.init.results.build) == "true" or similar) to every task that reads
build-image-index results and to the finally task show-sbom so they only run
when the init build result indicates a build should run; locate task definitions
referencing $(tasks.build-image-index.results.IMAGE_URL) and update them (and
the finally: - name: show-sbom entry) to include the same when guard used at
init (e.g., compare $(tasks.init.results.build) to "true") to make the execution
intent explicit and avoid implicit skip-by-missing-result behavior.
📒 Files selected for processing (1)
.tekton/managed-upgrade-operator-pull-request.yaml
🧹 Nitpick comments (1)
.tekton/managed-upgrade-operator-pull-request.yaml (1)
37-609: Consider reducing duplication between pull and push Tekton manifests.

The large inline pipelineSpec here is nearly identical to .tekton/managed-upgrade-operator-push.yaml; this increases drift risk during future task-bundle or graph updates. A shared template/base generation step would make future updates safer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.tekton/managed-upgrade-operator-pull-request.yaml around lines 37 - 609, The pipelineSpec in this manifest duplicates the same graph used in the other manifest (same tasks like init, clone-repository, build-container, build-image-index, etc.), so extract the shared pipeline definition into a single reusable source and reference it from both pull and push manifests; implement this by moving the common pipelineSpec into a shared YAML (or Tekton bundle/template) that exposes differing bits as params (e.g., IMAGE_URL, git-url, build-source-image) and update both .tekton manifests to import or reference that shared pipeline (or use YAML anchors/Helm/Kustomize) so only the small per-flow differences remain in each file and task names such as build-container and build-image-index continue to match the shared spec.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.tekton/managed-upgrade-operator-pull-request.yaml:
- Around line 37-609: The pipelineSpec in this manifest duplicates the same
graph used in the other manifest (same tasks like init, clone-repository,
build-container, build-image-index, etc.), so extract the shared pipeline
definition into a single reusable source and reference it from both pull and
push manifests; implement this by moving the common pipelineSpec into a shared
YAML (or Tekton bundle/template) that exposes differing bits as params (e.g.,
IMAGE_URL, git-url, build-source-image) and update both .tekton manifests to
import or reference that shared pipeline (or use YAML anchors/Helm/Kustomize) so
only the small per-flow differences remain in each file and task names such as
build-container and build-image-index continue to match the shared spec.
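One possible shape for the shared-template idea above, assuming a hypothetical generation step (for example a Makefile target running kustomize) that renders both .tekton manifests from a common base; all file names here are illustrative, and Pipelines-as-Code would read the rendered PipelineRuns, not this file:

```yaml
# Illustrative kustomization.yaml for a hypothetical render step.
resources:
  - base-pipelinerun.yaml              # shared inline pipelineSpec lives here
patches:
  - path: pull-request-overrides.yaml  # per-flow differences (event type, image expiry)
    target:
      kind: PipelineRun
```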
📒 Files selected for processing (2)
.tekton/managed-upgrade-operator-pull-request.yaml
.tekton/managed-upgrade-operator-push.yaml
@chamalabey: all tests passed! Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What type of PR is this?
(cleanup/refactor)
What this PR does / why we need it?
Currently, some Konflux configuration-update PRs were closed because they became too old and never passed the pre-merge check. As a result, Konflux is not fully functional on MUO. We need to manually apply those changes so that the Konflux configuration is up to date.
Closed PRs
https://github.com/openshift/managed-upgrade-operator/pull/541/changes
Some config was deleted accidentally by PR
https://github.com/openshift/managed-upgrade-operator/pull/555/changes
The change in tekton should be reverted
Which Jira/Github issue(s) this PR fixes?
Fixes #SREP-3765
Special notes for your reviewer:
Pre-checks (if applicable):