Releases: brainlid/langchain
v0.5.2
What's Changed
- Prep for v0.5.1 by @brainlid in #454
- fix(ChatVertexAI): Fix tool calls for ChatVertexAI and support for Gemini 3 models by @abh1shek-sh in #452
- Revising tool detection by @brainlid in #458
- prep for v0.5.2 release by @brainlid in #459
Full Changelog: v0.5.1...v0.5.2
v0.5.1
What's Changed
- fix(ChatMistralAI): Handle broken tool names that kill LLMChain by @xu-chris in #448
- Handle thought content parts from Google Vertex by @mattmatters in #430
- feat: Verify function parameters before executing, reject faulty message deltas by @xu-chris in #449
- fix(ChatOpenAIResponses): Handle failed response status from OpenAI Responses API by @xu-chris in #450
- Expanding callbacks for tool detection and UI feedback by @brainlid in #453
Full Changelog: v0.5.0...v0.5.1
v0.5.0
What's Changed
- Add req_config to ChatOpenAIResponses by @xxdavid in #415
- Add thinking config to vertex ai by @mattmatters in #423
- Add support for OpenAI Response API Stateful context by @cjimison in #425
- Fixes image file_id content type for ChatOpenAIResponses by @sezaru in #438
- fix(ChatMistralAI): missing error handling and fallback mechanism on server outages (#434) by @xu-chris in #435
- feat(ChatMistralAI): add support for parallel tool calls by @xu-chris in #433
- use elixir 1.17 by @nbw in #427
- feat(GoogleChatAI): add thought_signature support for Gemini 3 function calls by @abh1shek-sh in #431
- Don't include top_p for gpt-5.2+ in ChatOpenAIResponses by @montebrown in #428
- Fix Mistral 'thinking' content parts by @arjan in #418
- Add verbose_api field to ChatPerplexity and ChatMistralAI by @arjan in #416
- Add support for OpenAI reasoning/thinking events in ChatOpenAIResponses by @arjan in #421
- Add new reasoning effort values to ChatOpenAIResponses by @xxdavid in #419
- fix "Support reasoning_content of deepseek model" introducing UI bug … by @kayuapi in #429
- Support json schema in vertex ai by @mattmatters in #424
- Base work for new agent library by @brainlid in #442
- fix ChatGrok tool call arguments and message flattening by @KristerV in #420
- Revert "fix ChatGrok tool call arguments and message flattening" by @brainlid in #445
- prep for v0.5.0 release by @brainlid in #451
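Several entries above add configuration passthrough such as req_config. A minimal sketch of what using it might look like; the model name and timeout are illustrative, and req_config is assumed to accept Req options:

```elixir
alias LangChain.ChatModels.ChatOpenAIResponses

# Illustrative only: req_config is assumed to forward options to the
# underlying Req HTTP client, e.g. a longer receive timeout for slow
# streaming responses.
{:ok, model} =
  ChatOpenAIResponses.new(%{
    model: "gpt-4o",
    req_config: [receive_timeout: 120_000]
  })
```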
New Contributors
- @cjimison made their first contribution in #425
- @sezaru made their first contribution in #438
- @xu-chris made their first contribution in #435
- @nbw made their first contribution in #427
- @abh1shek-sh made their first contribution in #431
- @kayuapi made their first contribution in #429
- @KristerV made their first contribution in #420
Full Changelog: v0.4.1...v0.5.0
v0.4.1
What's Changed
- OpenAI responses API improvements by @arjan in #391
- Support Anthropic disable_parallel_tool_use tool_choice setting by @vlymar in #390
- Update gettext dependency version to 1.0 by @bijanbwb in #393
- Add DeepSeek chat model integration by @gilbertwong96 in #394
- Loosen the gettext dependency by @montebrown in #399
- add MessageDelta.merge_deltas/2 by @brainlid in #401
- formatting update by @brainlid in #402
- Added an :cache_messages option for ChatAnthropic, can improve cache utilization. by @montebrown in #398
- Add support for Anthropic API PDF reading to the ChatAnthropic model. by @jadengis in #403
- feat: Add support for :file_url to ChatAnthropic too by @jadengis in #404
- Support reasoning_content of deepseek model by @gilbertwong96 in #407
- Add req_opts to ChatAnthropic by @stevehodgkiss in #408
- Open AI Responses API: Add support for file_url with link to file by @reetou in #395
- Add strict tool use support to ChatAnthropic by @stevehodgkiss in #409
- Add strict to function of ChatModels.ChatOpenAI by @nallwhy in #301
- Allow multi-part tool responses. by @montebrown in #410
- prep for v0.4.1 release by @brainlid in #411
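The MessageDelta.merge_deltas/2 addition (#401) builds on delta merging for streamed responses. A sketch of the same idea using the existing MessageDelta.merge_delta/2, assuming deltas is a list of streamed chunks collected from the model:

```elixir
alias LangChain.MessageDelta

# Fold a stream of deltas into one accumulated delta.
# merge_delta/2 merges a new delta into the accumulator.
merged =
  Enum.reduce(deltas, nil, fn
    delta, nil -> delta
    delta, acc -> MessageDelta.merge_delta(acc, delta)
  end)
```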
New Contributors
- @vlymar made their first contribution in #390
- @bijanbwb made their first contribution in #393
- @gilbertwong96 made their first contribution in #394
- @reetou made their first contribution in #395
Full Changelog: v0.4.0...v0.4.1
v0.4.0
What's Changed since v0.3.3
- Add OpenAI and Claude thinking support - v0.4.0-rc.0 by @brainlid in #297
- vertex ai file url support by @ahsandar in #296
- Update docs for Vertex AI by @ahsandar in #304
- Fix ContentPart migration by @mathieuripert in #309
- Fix tests for content_part_for_api/2 of ChatOpenAI in v0.4.0-rc0 by @nallwhy in #300
- Fix tool_calls nil messages by @udoschneider in #314
- feat: Add structured output support to ChatMistralAI by @mathieuripert in #312
- feat: add configurable tokenizer to text splitters by @mathieuripert in #310
- simple formatting issue by @Bodhert in #307
- Update Message.new_system spec to accurately accept [ContentPart.t()]… by @rtorresware in #315
- Fix: Add token usage to ChatGoogleAI message metadata by @mathieuripert in #316
- feat: include raw API responses in LLM error objects for better debug… by @TwistingTwists in #317
- expanded docs and test coverage for prompt caching by @brainlid in #325
- Fix AWS Bedrock stream decoder ordering issue by @stevehodgkiss in #327
- significant updates for v0.4.0-rc.1 by @brainlid in #328
- filter out empty lists in message responses by @brainlid in #333
- fix: Require gettext ~> 0.26 by @mweidner037 in #332
- Add retry: :transient to Req for Anthropic models in stream mode by @jonator in #329
- fixed issue with poorly matching list in case by @brainlid in #334
- feat: Add organization ID as a parameter by @hjemmel in #337
- Add missing verbose_api field to ChatOllamaAI for streaming compatibility by @gur-xyz in #341
- Added usage data to the VertexAI Message response. by @raulchedrese in #335
- feat: add run mode: step by @CaiqueMitsuoka in #343
- feat: add support for multiple tools in run_until_tool_used by @fortmarek in #345
- Fix ChatOllamaAI stop sequences: change from string to array type by @gur-xyz in #342
- expanded logging for ChatAnthropic API errors by @brainlid in #349
- Prevent crash when ToolResult with string in ChatGoogleAI.for_api/1 by @nallwhy in #352
- Bedrock OpenAI-compatible API compatibility fix by @stevehodgkiss in #356
- added xAI Grok chat model support by @alexfilatov in #338
- Support thinking to ChatGoogleAI by @nallwhy in #354
- Add req_config to ChatModels.ChatGoogleAI by @nallwhy in #357
- Clean up treating MessageDelta in ChatModels.ChatGoogleAI by @nallwhy in #353
- Expose full response headers through a new on_llm_response_headers callback by @brainlid in #358
- only include "user" with OpenAI request when a value is provided by @brainlid in #364
- Handle no content parts responses in ChatGoogleAI by @nallwhy in #365
- Adds support for gpt-image-1 in LangChain.Images.OpenAIImage by @Ven109 in #360
- Prep for release v0.4.0-rc.2 by @brainlid in #366
- fix: handle missing finish_reason in streaming responses for LiteLLM compatibility by @fbettag in #367
- Add support for native tool calls to ChatVertexAI by @raulchedrese in #359
- Adds should_continue? optional function to mode step by @CaiqueMitsuoka in #361
- Add OpenAI Deep Research integration by @fbettag in #336
- Add parallel_tool_calls option to ChatOpenAI model by @martosaur in #371
- Add optional AWS session token handling in BedrockHelpers by @quangngd in #372
- fix: handle LiteLLM responses with null b64_json in OpenAIImage by @fbettag in #368
- Add Orq AI chat by @arjan in #377
- Add req_config to ChatModels.ChatOpenAI by @koszta in #376
- fix(ChatGoogleAI): Handle cumulative token usage by @mweidner037 in #373
- fix(ChatGoogleAI): Prevent error from thinking content parts by @mweidner037 in #374
- feat(ChatGoogleAI): Full thinking config by @mweidner037 in #375
- Support verbosity parameter for ChatOpenAI by @rohan-b99 in #379
- add retry_on_fallback? to chat model definition and all models by @brainlid in #350
- Prep for v0.4.0-rc.3 by @brainlid in #380
- Use moduledoc instead of doc for LLMChain documentation by @xxdavid in #384
- Support OTP 28 in CI by @kianmeng in #382
- OpenAI responses by @vasspilka in #381
- Add AGENTS.md and CLAUDE.md file support by @brainlid in #385
- Suppress the compiler warning messages for ChatBumblebee by @brainlid in #386
- fix: Support for json-schema in OpenAI responses API by @vasspilka in #387
- Prepare for v0.4.0 release by @brainlid in #388
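The on_llm_response_headers callback (#358) slots into the library's callback-handler map. A hedged sketch; the exact handler arity and arguments may differ from this assumption:

```elixir
alias LangChain.Chains.LLMChain

# Sketch: register a handler map on the chain. The handler is assumed
# to receive the chain and the raw response headers.
handlers = %{
  on_llm_response_headers: fn _chain, headers ->
    IO.inspect(headers, label: "LLM response headers")
  end
}

chain = LLMChain.add_callback(chain, handlers)
```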
New Contributors
- @ahsandar made their first contribution in #296
- @mathieuripert made their first contribution in #309
- @udoschneider made their first contribution in #314
- @Bodhert made their first contribution in #307
- @rtorresware made their first contribution in #315
- @TwistingTwists made their first contribution in #317
- @mweidner037 made their first contribution in #332
- @jonator made their first contribution in #329
- @hjemmel made their first contribution in #337
- @gur-xyz made their first contribution in #341
- @CaiqueMitsuoka made their first contribution in #343
- @fortmarek made their first contribution in #345
- @alexfilatov made their first contribution in #338
- @Ven109 made their first contribution in #360
- @martosaur made their first contribution in #371
- @quangngd made their first contribution in #372
- @arjan made their first contribution in #377
- @koszta made their first contribution in #376
- @rohan-b99 made their first contribution in #379
- @xxdavid made their first contribution in #384
Full Changelog: v0.3.3...v0.4.0
v0.4.0-rc.3
What's Changed
- fix: handle missing finish_reason in streaming responses for LiteLLM compatibility by @fbettag in #367
- Add support for native tool calls to ChatVertexAI by @raulchedrese in #359
- Adds should_continue? optional function to mode step by @CaiqueMitsuoka in #361
- Add OpenAI Deep Research integration by @fbettag in #336
- Add parallel_tool_calls option to ChatOpenAI model by @martosaur in #371
- Add optional AWS session token handling in BedrockHelpers by @quangngd in #372
- fix: handle LiteLLM responses with null b64_json in OpenAIImage by @fbettag in #368
- Add Orq AI chat by @arjan in #377
- Add req_config to ChatModels.ChatOpenAI by @koszta in #376
- fix(ChatGoogleAI): Handle cumulative token usage by @mweidner037 in #373
- fix(ChatGoogleAI): Prevent error from thinking content parts by @mweidner037 in #374
- feat(ChatGoogleAI): Full thinking config by @mweidner037 in #375
- Support verbosity parameter for ChatOpenAI by @rohan-b99 in #379
- add retry_on_fallback? to chat model definition and all models by @brainlid in #350
- Prep for v0.4.0-rc.3 by @brainlid in #380
New Contributors
- @martosaur made their first contribution in #371
- @quangngd made their first contribution in #372
- @arjan made their first contribution in #377
- @koszta made their first contribution in #376
- @rohan-b99 made their first contribution in #379
Full Changelog: v0.4.0-rc.2...v0.4.0-rc.3
v0.4.0-rc.2
What's Changed
- filter out empty lists in message responses by @brainlid in #333
- fix: Require gettext ~> 0.26 by @mweidner037 in #332
- Add retry: :transient to Req for Anthropic models in stream mode by @jonator in #329
- fixed issue with poorly matching list in case by @brainlid in #334
- feat: Add organization ID as a parameter by @hjemmel in #337
- Add missing verbose_api field to ChatOllamaAI for streaming compatibility by @gur-xyz in #341
- Added usage data to the VertexAI Message response. by @raulchedrese in #335
- feat: add run mode: step by @CaiqueMitsuoka in #343
- feat: add support for multiple tools in run_until_tool_used by @fortmarek in #345
- Fix ChatOllamaAI stop sequences: change from string to array type by @gur-xyz in #342
- expanded logging for ChatAnthropic API errors by @brainlid in #349
- Prevent crash when ToolResult with string in ChatGoogleAI.for_api/1 by @nallwhy in #352
- Bedrock OpenAI-compatible API compatibility fix by @stevehodgkiss in #356
- added xAI Grok chat model support by @alexfilatov in #338
- Support thinking to ChatGoogleAI by @nallwhy in #354
- Add req_config to ChatModels.ChatGoogleAI by @nallwhy in #357
- Clean up treating MessageDelta in ChatModels.ChatGoogleAI by @nallwhy in #353
- Expose full response headers through a new on_llm_response_headers callback by @brainlid in #358
- only include "user" with OpenAI request when a value is provided by @brainlid in #364
- Handle no content parts responses in ChatGoogleAI by @nallwhy in #365
- Adds support for gpt-image-1 in LangChain.Images.OpenAIImage by @Ven109 in #360
- Prep for release v0.4.0-rc.2 by @brainlid in #366
New Contributors
- @mweidner037 made their first contribution in #332
- @jonator made their first contribution in #329
- @hjemmel made their first contribution in #337
- @gur-xyz made their first contribution in #341
- @CaiqueMitsuoka made their first contribution in #343
- @fortmarek made their first contribution in #345
- @alexfilatov made their first contribution in #338
- @Ven109 made their first contribution in #360
Full Changelog: v0.4.0-rc.1...v0.4.0-rc.2
v0.4.0-rc.1
Refer to the CHANGELOG.md for notes on breaking changes and migrating.
What's Changed
- vertex ai file url support by @ahsandar in #296
- Update docs for Vertex AI by @ahsandar in #304
- Fix ContentPart migration by @mathieuripert in #309
- Fix tests for content_part_for_api/2 of ChatOpenAI in v0.4.0-rc0 by @nallwhy in #300
- Fix tool_calls nil messages by @udoschneider in #314
- feat: Add structured output support to ChatMistralAI by @mathieuripert in #312
- feat: add configurable tokenizer to text splitters by @mathieuripert in #310
- simple formatting issue by @Bodhert in #307
- Update Message.new_system spec to accurately accept [ContentPart.t()]… by @rtorresware in #315
- Fix: Add token usage to ChatGoogleAI message metadata by @mathieuripert in #316
- feat: include raw API responses in LLM error objects for better debug… by @TwistingTwists in #317
- expanded docs and test coverage for prompt caching by @brainlid in #325
- Fix AWS Bedrock stream decoder ordering issue by @stevehodgkiss in #327
- significant updates for v0.4.0-rc.1 by @brainlid in #328
New Contributors
- @ahsandar made their first contribution in #296
- @mathieuripert made their first contribution in #309
- @udoschneider made their first contribution in #314
- @Bodhert made their first contribution in #307
- @rtorresware made their first contribution in #315
- @TwistingTwists made their first contribution in #317
Full Changelog: v0.4.0-rc.0...v0.4.0-rc.1
v0.4.0-rc.0
What's Changed
Introduces breaking changes while adding expanded support for thinking models.
NOTE: See the CHANGELOG.md for more details
IMPORTANT: Not all models are supported with this RC.
Full Changelog: v0.3.3...v0.4.0-rc.0
v0.3.3
What's Changed
- upgrade gettext and migrate by @brainlid in #271
- Support caching tool results for Anthropic calls by @ci in #269
- Fix OpenAI verbose_api by @aaparmeggiani in #274
- Support choice of Anthropic beta headers by @ci in #273
- Fix specifying media uris for google vertex by @mattmatters in #242
- feat: add support for pdf content with OpenAI model by @bwan-nan in #275
- feat: File urls for Google by @vasspilka in #286
- support streaming responses from mistral by @manukall in #287
- Support for json_response in ChatModels.ChatGoogleAI by @nallwhy in #277
- Fix options being passed to the ollama chat api by @alappe in #179
- Support for file with file_id in ChatOpenAI by @nallwhy in #283
- added LLMChain.run_until_tool_used/3 by @brainlid in #292
- adds telemetry by @epinault in #284
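LLMChain.run_until_tool_used/3 (#292) keeps running the chain until a named tool fires. A hedged sketch; the tool name, the max_runs option, and the exact return shape are assumptions for illustration:

```elixir
alias LangChain.Chains.LLMChain

# Sketch: loop the chain until the hypothetical "get_weather" tool is
# invoked, then return the updated chain and that tool's result.
{:ok, updated_chain, tool_result} =
  LLMChain.run_until_tool_used(chain, "get_weather", max_runs: 10)
```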
New Contributors
- @ci made their first contribution in #269
- @aaparmeggiani made their first contribution in #274
- @mattmatters made their first contribution in #242
- @vasspilka made their first contribution in #286
- @manukall made their first contribution in #287
- @epinault made their first contribution in #284
Full Changelog: v0.3.2...v0.3.3