As described by @belozierov in #98 (comment):
During a single LLM request, the model may perform many turns, producing intermediate outputs such as reasoning, user-visible messages, and tool calls. Ideally, these intermediate steps should be observable by the user via LanguageModelSession.
For this to work, LanguageModelSession.transcript would need to be updated while LanguageModel.streamResponse is still running.
One possible approach would be for LanguageModel.streamResponse to return not a stream of Content, but a stream of an enum that can represent either Content or Transcript.Entry; Transcript.Entry values would then be appended to LanguageModelSession.transcript as the stream progresses.
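A minimal sketch of that shape, assuming a hypothetical `StreamElement` enum with `String` standing in for `Content`; the `consume` function and the placeholder `Transcript` type are illustrative, not the library's actual API:

```swift
import Foundation

// Placeholder stand-ins for the library's `Transcript` and
// `Transcript.Entry`, defined here only so the sketch is self-contained.
struct Transcript {
    enum Entry {
        case reasoning(String)
        case toolCall(name: String)
        case message(String)
    }
    var entries: [Entry] = []
}

// One possible element type for the stream: either a chunk of the final
// response or an intermediate transcript entry. The name `StreamElement`
// is an assumption for illustration.
enum StreamElement {
    case content(String)
    case transcriptEntry(Transcript.Entry)
}

// Hypothetical consumption loop, as it might look inside
// `LanguageModelSession`: content chunks accumulate into the response,
// while transcript entries are appended as soon as they arrive, making
// intermediate steps observable mid-stream.
func consume(
    _ stream: AsyncThrowingStream<StreamElement, Error>
) async throws -> (response: String, transcript: Transcript) {
    var response = ""
    var transcript = Transcript()
    for try await element in stream {
        switch element {
        case .content(let chunk):
            response += chunk                  // user-visible output
        case .transcriptEntry(let entry):
            transcript.entries.append(entry)   // visible during the stream
        }
    }
    return (response, transcript)
}
```

With an element type like this, the session could keep its transcript in sync turn by turn, rather than only after streamResponse completes.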