add llm_request event (or onPayload) #1693
linktohack started this conversation in General · Replies: 0 comments
This helps us debug the payload sent to the provider. The change can be very simple.

Then we can have an extension that logs the request and the response (response logging exists today) from the provider.

The PR is ready and has been working fine for me for a few weeks now (it was very helpful when I wanted to debug the llama.cpp chat parser).
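To illustrate why the change can be small, here is a minimal sketch of what such an `llm_request` event could look like. All names here (`Bus`, the `"llm_request"` event name, `sendToProvider`) are assumptions for illustration, not the project's actual API:

```typescript
// Hypothetical sketch: a tiny event bus plus a one-line emit before the
// provider call. Names are illustrative, not the real implementation.
type Payload = { model: string; messages: { role: string; content: string }[] };
type Handler = (payload: Payload) => void;

class Bus {
  private handlers = new Map<string, Handler[]>();
  on(event: string, fn: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(fn);
    this.handlers.set(event, list);
  }
  emit(event: string, payload: Payload): void {
    for (const fn of this.handlers.get(event) ?? []) fn(payload);
  }
}

const bus = new Bus();

// The "very simple" change: emit the payload right before sending it.
function sendToProvider(payload: Payload): void {
  bus.emit("llm_request", payload); // <-- the one added line
  // ...actual HTTP request to the provider would go here...
}

// An extension can now log every request without touching provider code.
const seen: Payload[] = [];
bus.on("llm_request", (p) => seen.push(p));

sendToProvider({ model: "llama", messages: [{ role: "user", content: "hi" }] });
console.log(seen.length, seen[0].model);
```

The key design point is that the provider code only gains one `emit` call; all debugging logic lives in subscribers, which matches the existing response-logging extension pattern.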