
Conversation

@philippfromme
Contributor

Proposed Changes

Note that one factor in how long task testing takes is the polling interval we use: https://github.com/camunda/task-testing/blob/main/lib/TaskExecution.js#L29 A full second is quite a long interval.

Related to camunda/camunda-modeler#5487
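To illustrate why the interval matters (a minimal sketch, not the actual `TaskExecution.js` code; function and parameter names are made up): with a 1s interval, a task that completes just after a poll is only observed up to ~1s later, so the interval puts a floor on perceived latency.

```javascript
// Sketch of a poll loop: `isDone` is checked once per interval, so the
// worst-case observation delay is one full interval after completion.
async function pollUntilDone(isDone, intervalMs = 1000) {
  while (!(await isDone())) {
    // if the task finished right after this check, we still wait a full interval
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```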

Checklist

Ensure you provide everything we need to review your contribution:

  • Your contribution meets the definition of done
  • Any new additions or modifications are consistent with the existing UI and UX patterns
  • Pull request description establishes context:
    • Link to related issue(s), i.e. Closes {LINK_TO_ISSUE} or Related to {LINK_TO_ISSUE}
    • Brief textual description of the changes
    • Screenshots or short videos showing UI/UX changes
    • Steps to try out, i.e. using the @bpmn-io/sr tool

@jarekdanielak
Contributor

I'm not sure we need to add an internal property for that.

We can just add timestamps to start and end events, and let the consumer do the thing. WDYT?

@philippfromme
Contributor Author

> We can just add timestamps to start and end events, and let the consumer do the thing. WDYT?

The problem with that is that the events would need unique IDs so they can be correlated to calculate durations. Not sure if that would be better.
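To make the correlation concern concrete (a hypothetical sketch; the `executionId` field and handler names are illustrative, not the actual event API): if executions can overlap, the consumer needs a shared ID on the start and end events to pair them up, e.g. via a `Map`.

```javascript
// Pending start timestamps, keyed by a hypothetical unique execution ID.
const startTimes = new Map();

function handleTaskExecutionStarted({ executionId, timestamp }) {
  startTimes.set(executionId, timestamp);
}

function handleTaskExecutionFinished({ executionId, timestamp }) {
  // pair the end event with its start event via the shared ID
  const started = startTimes.get(executionId);
  startTimes.delete(executionId);

  return timestamp - started; // duration in ms
}
```

With overlapping executions, a single shared variable would report wrong durations; the `Map` keeps each pairing correct.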

@jarekdanielak
Contributor

I might be overlooking the unique ID thing. Is such an approach too naive?

Task testing consumer:

```javascript
// module-level state (arrow functions don't get their own `this`,
// so a plain variable is used instead of `this.taskExecutionStartTime`)
let taskExecutionStartTime;

const handleTaskExecutionStarted = (event) => {
  const { timestamp } = event;
  taskExecutionStartTime = timestamp;
};

const handleTaskExecutionFinished = (event) => {
  const { element, timestamp } = event;
  const duration = timestamp - taskExecutionStartTime;

  console.log('Task', element.id, 'executed in', duration, 'ms');
};
```

@jarekdanielak
Contributor

Either way, we should probably measure separately for deployment, starting instance and the actual execution.
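From the consumer side, the separate measurements could look like this (a sketch under assumed event shapes; the phase names and paired start/finish events are illustrative, not an existing API):

```javascript
// One duration per phase, assuming each phase emits a start and a finish
// event carrying the same `phase` label and a timestamp.
const phaseStartTimes = {};
const phaseDurations = {};

function handlePhaseStarted({ phase, timestamp }) {
  phaseStartTimes[phase] = timestamp;
}

function handlePhaseFinished({ phase, timestamp }) {
  phaseDurations[phase] = timestamp - phaseStartTimes[phase];
}
```

That would let us see at a glance whether deployment, instance start, or the execution itself dominates the total.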

@philippfromme
Contributor Author

> Either way, we should probably measure separately for deployment, starting instance and the actual execution.

@nikku Curious to hear your perspective as well in terms of what would be useful.

@nikku
Member

nikku commented Dec 10, 2025

I propose we measure anything that is not "end user wait time".

Goal: Understand where perceived slowness is coming from.


Related question: how do we report this? Does Sentry have a solution for this kind of performance tracing that we can use?

@philippfromme
Contributor Author

> I propose we measure anything that is not "end user wait time".

Not sure what exactly that means. What is end user wait time?

Sentry seems to have a feature we could use for that: https://docs.sentry.io/product/insights/overview/.
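For what it's worth, recent Sentry JS SDKs expose a span API (`Sentry.startSpan`) that could wrap each phase. Below is a rough sketch of the shape this might take; the `Sentry` object is stubbed so the snippet runs without the SDK installed, and all span names and phase callbacks are made up.

```javascript
// Collected spans (the stub records here; the real SDK would report to Sentry).
const spans = [];

// Minimal stand-in for Sentry's startSpan(options, callback) signature.
const Sentry = {
  async startSpan({ name }, callback) {
    const start = Date.now();
    try {
      return await callback();
    } finally {
      spans.push({ name, duration: Date.now() - start });
    }
  }
};

// Hypothetical driver wrapping each task-testing phase in its own span.
async function runTaskTest({ deploy, startInstance, executeTask }) {
  await Sentry.startSpan({ name: 'task-testing.deploy' }, deploy);
  await Sentry.startSpan({ name: 'task-testing.start-instance' }, startInstance);
  await Sentry.startSpan({ name: 'task-testing.execute' }, executeTask);
}
```

With the real SDK, these spans would show up as a trace in Sentry's performance views, which would answer the "how do we report this" question without building our own aggregation.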

@philippfromme philippfromme added the backlog Queued in backlog label Jan 13, 2026 — with bpmn-io-tasks
@philippfromme philippfromme removed the in progress Currently worked on label Jan 13, 2026
