
Conversation

@kusumachalasani
Contributor

@kusumachalasani kusumachalasani commented Jan 24, 2026

Description

Include an application metrics exposure guide for generating runtime recommendations

Fixes # (issue)

Type of change

  • [ ] Bug fix
  • [ ] New feature
  • [x] Docs update
  • [ ] Breaking change (What changes might users need to make in their application due to this PR?)
  • [ ] Requires DB changes

Summary by Sourcery

Documentation:

  • Introduce an application metrics exposure guide covering configuration for Spring Boot, Quarkus, and plain Java applications, along with OpenShift user workload monitoring and Prometheus scrape setup.

Signed-off-by: kusuma chalasani <kchalasa@redhat.com>
@kusumachalasani kusumachalasani self-assigned this Jan 24, 2026
@sourcery-ai
Contributor

sourcery-ai bot commented Jan 24, 2026

Reviewer's Guide

Adds a new documentation page describing how to expose application metrics to Prometheus so that runtime recommendations can be generated. It covers enabling OpenShift user workload monitoring, framework-specific metric exposure (Spring Boot, Quarkus, and plain Java with the JMX exporter), and Prometheus scrape configuration examples.
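For the Spring Boot path, a minimal sketch of what the guide describes (assuming the usual Actuator-plus-Micrometer approach, with `spring-boot-starter-actuator` and `micrometer-registry-prometheus` on the classpath) would be:

```properties
# application.properties — expose the Prometheus endpoint via Spring Boot Actuator
management.endpoints.web.exposure.include=health,prometheus
```

With this in place, metrics are served in Prometheus text format at `/actuator/prometheus`.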

Sequence diagram for Prometheus scraping application metrics

sequenceDiagram
  participant PROM as prometheus_instance
  participant APP as application
  participant ME as metrics_endpoint

  PROM->>APP: open_http_connection
  APP->>ME: route_request_to_metrics_endpoint
  ME-->>PROM: respond_with_prometheus_formatted_metrics
  PROM->>PROM: store_samples_and_evaluate_rules
  PROM-->>APP: close_connection

Flow diagram for configuring application metrics exposure by runtime

flowchart TD
  START[start]
  RUNTIME["Select application runtime"]
  DECIDE_RUNTIME{runtime_type}
  SB[spring_boot_configuration]
  QK[quarkus_configuration]
  PJ[plain_java_jmx_exporter_configuration]
  SB_STEPS["Add_actuator_and_micrometer_dependencies_and_enable_prometheus_endpoint"]
  QK_STEPS["Add_quarkus_micrometer_prometheus_extension_and_enable_export"]
  PJ_STEPS["Run_with_jmx_prometheus_javaagent_and_config_file"]
  PROM_CFG["configure_prometheus_scrape_job"]
  END[end]

  START --> RUNTIME --> DECIDE_RUNTIME
  DECIDE_RUNTIME -->|spring_boot| SB
  DECIDE_RUNTIME -->|quarkus| QK
  DECIDE_RUNTIME -->|plain_java| PJ

  SB --> SB_STEPS --> PROM_CFG
  QK --> QK_STEPS --> PROM_CFG
  PJ --> PJ_STEPS --> PROM_CFG

  PROM_CFG --> END
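For the Quarkus branch of the flow above, a hedged sketch (assuming the standard `quarkus-micrometer-registry-prometheus` extension is added as a dependency):

```properties
# application.properties — enable Micrometer's Prometheus export in Quarkus
quarkus.micrometer.export.prometheus.enabled=true
```

With the extension on the classpath, Quarkus serves metrics at `/q/metrics` by default.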

File-Level Changes

Change: Introduce a new guide detailing how to expose application metrics and configure Prometheus scraping for different Java runtimes, including OpenShift-specific setup.

Details:
  • Document prerequisites for metrics exposure, including Micrometer/JMX usage, Prometheus scraping, and OpenShift user workload monitoring requirements.
  • Provide step-by-step instructions to enable user workload monitoring in OpenShift via the cluster-monitoring-config ConfigMap and verify the related pods.
  • Describe the Spring Boot setup for Prometheus metrics using Actuator and Micrometer, including dependencies, properties, endpoint, and scrape target.
  • Describe the Quarkus setup for the Micrometer Prometheus registry, including dependency, configuration flags, endpoint, and scrape target.
  • Describe metrics exposure for plain Java applications via the Prometheus JMX Exporter, including javaagent invocation, endpoint, and scrape target.
  • Provide a sample Prometheus scrape configuration with notes on adapting metrics_path, targets, and scrape_interval for different application types.

Files: docs/application_metrics_exposure.md
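The sample scrape configuration mentioned above might look like the following sketch (job name, target, and interval are placeholders; `metrics_path` varies by runtime):

```yaml
scrape_configs:
  - job_name: "my-application"               # placeholder name
    metrics_path: /actuator/prometheus       # /q/metrics for Quarkus, /metrics for the JMX exporter
    scrape_interval: 30s
    static_configs:
      - targets: ["my-app.example.com:8080"] # placeholder host:port
```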

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high-level feedback:

  • In the OpenShift section, clarify that user application namespaces typically need ServiceMonitor/PodMonitor resources rather than direct Prometheus scrape_config snippets, and show how this ties into the example metrics endpoints.
  • Fix minor wording/formatting issues (e.g., Monitoring for user workloads will be enabled automatically.Verify missing a space, and ensure consistent code block language hints for YAML/XML/Properties where appropriate).
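To make the ServiceMonitor suggestion concrete, a hedged sketch for a user workload namespace (all names, labels, and the port are placeholders) could look like:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor            # placeholder
  namespace: my-app-namespace     # placeholder user workload namespace
spec:
  selector:
    matchLabels:
      app: my-app                 # must match the labels on the app's Service
  endpoints:
    - port: http                  # named port on the Service
      path: /actuator/prometheus  # adjust per runtime (/q/metrics, /metrics, ...)
      interval: 30s
```

The Prometheus Operator translates such resources into scrape configuration, so users do not edit `scrape_config` directly in user workload namespaces.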
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In the OpenShift section, clarify that user application namespaces typically need ServiceMonitor/PodMonitor resources rather than direct Prometheus `scrape_config` snippets, and show how this ties into the example metrics endpoints.
- Fix minor wording/formatting issues (e.g., `Monitoring for user workloads will be enabled automatically.Verify` missing a space, and ensure consistent code block language hints for YAML/XML/Properties where appropriate).

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.


Projects

Status: Under Review
