
Conversation

@devin-ai-integration
Contributor

Summary

Fixes #4319

When the same agent is reused across multiple sequential tasks (a common pattern in Flows with @listen decorators), the agent executor's message history was not cleared between tasks. Messages accumulated across tasks, leading to duplicate system messages, context pollution, and eventually crashes with "Invalid response from LLM call - None or empty".

The fix adds two lines to _update_executor_parameters() in agent/core.py to clear the messages list and reset the iterations counter when the agent executor is reused for a new task.
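
A minimal sketch of the change for reviewers skimming without the diff. The _ExecutorStub/AgentSketch scaffolding is illustrative only; in the real code these are the crewai Agent and its executor, and the rest of the method body is elided:

class _ExecutorStub:
    """Stand-in for the real agent executor; only the fields the fix touches."""

    def __init__(self) -> None:
        self.messages: list = []
        self.iterations: int = 0


class AgentSketch:
    """Illustrative only -- not the actual crewai Agent class."""

    def __init__(self) -> None:
        self.agent_executor = _ExecutorStub()

    def _update_executor_parameters(self, task) -> None:
        # ... existing logic that re-points the executor at the new task ...

        # The two added lines: start each new task from a clean executor.
        self.agent_executor.messages = []   # drop history from the previous task
        self.agent_executor.iterations = 0  # reset the iterations counter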

Review & Testing Checklist for Human

  • Verify clearing messages is always correct: Confirm there's no scenario where preserving message history between tasks is intentional behavior. The fix unconditionally clears messages whenever _update_executor_parameters is called.
  • Test with real Flow pattern: The unit tests verify state is cleared but don't run actual LLM calls. Recommend testing with a real Flow using @listen decorators where the same agent executes multiple sequential tasks; a sketch of such a Flow follows the manual test below.
  • Check for regressions in existing agent reuse patterns: Run any existing integration tests that involve agent reuse across crews/tasks.

Suggested manual test:

from crewai import Agent, Task, Crew

agent = Agent(role="Test", goal="Test", backstory="Test", allow_delegation=False)

# Reuse the same agent instance across three sequential one-task crews
for i in range(3):
    task = Task(description=f"Task {i}", expected_output="Result", agent=agent)
    crew = Crew(agents=[agent], tasks=[task])
    result = crew.kickoff()
    print(f"Task {i} completed, messages count: {len(agent.agent_executor.messages)}")

Messages should reset to a consistent count after each task, not accumulate.
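
The checklist above also recommends exercising the real Flow/@listen pattern. A hedged sketch of such a check follows; the Flow import path and decorators reflect CrewAI's documented Flow API, but version details may differ, and running it needs a configured LLM:

from crewai import Agent, Task, Crew
from crewai.flow.flow import Flow, listen, start

agent = Agent(role="Test", goal="Test", backstory="Test", allow_delegation=False)


def run_task(description: str) -> str:
    # Reuse the shared agent for each one-task crew.
    task = Task(description=description, expected_output="Result", agent=agent)
    return str(Crew(agents=[agent], tasks=[task]).kickoff())


class ReuseFlow(Flow):
    @start()
    def first(self):
        return run_task("Task 0")

    @listen(first)
    def second(self, _prev):
        return run_task("Task 1")

    @listen(second)
    def third(self, _prev):
        # By the third sequential task the executor should not be carrying
        # message history accumulated from the earlier tasks.
        return run_task("Task 2")


ReuseFlow().kickoff()
print("final messages count:", len(agent.agent_executor.messages))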

Notes

Fixes #4319

When the same agent is reused across multiple sequential tasks (common
pattern in Flow with @listen decorators), the agent executor's message
history was not cleared between tasks. This caused messages to accumulate,
leading to:
- Duplicate system messages
- Context pollution
- Eventually crashes with 'Invalid response from LLM call - None or empty'

The fix clears the messages list and resets the iterations counter in
_update_executor_parameters() when the agent executor is reused for a
new task.

Added tests to verify (a rough sketch follows these notes):
- Messages are cleared when agent executor is reused between tasks
- Iterations counter is reset
- State isolation works across multiple crew kickoffs

Co-Authored-By: João <joao@crewai.com>
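
The actual test code added by the PR may differ; as a rough sketch of the kind of state-isolation check described in the notes above (a real unit test would stub out the LLM rather than make live calls):

from crewai import Agent, Task, Crew


def test_executor_state_is_reset_between_kickoffs():
    agent = Agent(role="Test", goal="Test", backstory="Test", allow_delegation=False)
    counts = []
    for i in range(3):
        task = Task(description=f"Task {i}", expected_output="Result", agent=agent)
        Crew(agents=[agent], tasks=[task]).kickoff()
        counts.append(len(agent.agent_executor.messages))

    # With the fix, every kickoff starts from a cleared executor, so the
    # per-run message count stays flat instead of growing task over task.
    assert counts[0] == counts[1] == counts[2]
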
@devin-ai-integration
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

