---
description: The fast_mcp extension brings Model Context Protocol (MCP) support to `llms.py`, allowing you to extend LLM capabilities with a wide range of external tools and services.
---
## Install
Add the [fast_mcp](https://github.com/llmspy/fast_mcp) extension to enable MCP support using the [FastMCP Python Framework](https://gofastmcp.com):

<ShellCommand>llms --add fast_mcp</ShellCommand>
## Features
- **Standardized Tool Access**: Connect to any MCP-compliant server (Node.js, Python, etc.) seamlessly.
- **Dynamic Discovery**: Automatically discovers and registers all tools exposed by the configured servers.
- **Parallel Discovery**: All configured MCP servers are discovered concurrently for fast startup times.
- **Deterministic Registration**: Tools are registered in configuration order; if multiple servers provide tools with the same name, the later server in the config overrides earlier ones.
## Configuration
The extension manages MCP servers via a `mcp.json` configuration file. Each server entry under `mcpServers` supports the following fields:

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `command` | string | Yes | The executable to run (e.g., `npx`, `uvx`, `uv`, `python`) |
| `args` | array | No | Command-line arguments passed to the command |
| `env` | object | No | Environment variables to set for the server process |
| `timeout` | number | No | Timeout in seconds for tool execution |
| `description` | string | No | A human-readable description of the server |
### Environment Variable Substitution
To allow for flexible and shared configurations, you can reference environment variables using the `$` prefix in both `args` and `env` values, e.g.:

- `$PWD` - Current working directory
- `$GEMINI_API_KEY` - Any environment variable

**Selective Registration**: MCP servers are only registered if **all** referenced environment variables are available. If any variable is missing, that server is skipped during discovery. This allows you to maintain a single shared config with optional servers.
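
As a rough illustration of this rule (a sketch only, not the extension's actual source; `substitute` and `resolve_server` are hypothetical helpers), substitution and skip-if-missing could look like this:

```python
import os
import re
from typing import Optional

_VAR = re.compile(r"\$([A-Za-z_][A-Za-z0-9_]*)")

def substitute(value: str, missing: set) -> str:
    """Expand $VAR references from the environment, recording any that are unset."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name in os.environ:
            return os.environ[name]
        missing.add(name)
        return match.group(0)  # leave the reference untouched
    return _VAR.sub(repl, value)

def resolve_server(cfg: dict) -> Optional[dict]:
    """Return the config with env vars expanded, or None if any referenced variable is unset."""
    missing: set = set()
    resolved = dict(cfg)
    resolved["args"] = [substitute(a, missing) for a in cfg.get("args", [])]
    resolved["env"] = {k: substitute(v, missing) for k, v in cfg.get("env", {}).items()}
    return None if missing else resolved  # None means the server is skipped during discovery
```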
### Example `mcp.json`

```json
{
  "mcpServers": {
    "filesystem": {
      "description": "Anthropic's MCP Server for secure filesystem operations",
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "$PWD"
      ]
    },
    "git": {
      "description": "Provides tools to read, search, and manipulate Git repositories",
      "command": "uvx",
      "args": [
        "mcp-server-git",
        "--repository",
        "$PWD"
      ]
    },
    "gemini-gen": {
      "description": "Gemini Image and Audio TTS generation",
      "command": "uvx",
      "args": ["gemini-gen-mcp"],
      "env": {
        "GEMINI_API_KEY": "$GEMINI_API_KEY"
      }
    }
  }
}
```
## Tools UI
Information about all discovered MCP servers and their registered tools is available in the **Tools** page under the **MCP Servers** section. By default only Anthropic's [Filesystem MCP Server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem) is configured.

<Screenshot src="/img/mcp-servers-empty.webp" />

You can edit the `mcp.json` file directly to add your own servers, or use the UI to **Add**, **Edit**, or **Delete** servers; the **Copy** button copies an individual server's configuration.
After adding servers and restarting the application, the Tools page will display all discovered servers and their registered tools.

<Screenshot src="/img/mcp-servers.webp" />
## Executing Tools
Fundamentally, MCP servers are a standardized way to expose external tools to LLMs. Once MCP servers are configured and their tools discovered, LLMs can invoke them during chat sessions like any other tool.

MCP tools are grouped under their server name, making it easy to identify, enable, or disable them for each chat session. You can execute tools directly from the Tools page by clicking the **Execute** button next to each tool, filling out the required parameters in the dialog, and clicking **Run Tool**.

<Screenshot src="/img/tools-exec.webp" />
#### Results
Upon execution, the tool's output is displayed in a results dialog with specific rendering based on the output type:

<Screenshot src="/img/tools-exec-results.webp" />
### Chat Sessions
When included, the same tools can also be invoked indirectly by LLMs during chat sessions.

Tool outputs containing HTML content are rendered within a sandboxed `<iframe>` in the results dialog, which safely isolates the content while still letting you interact with it, even playing games like Tetris generated from the arguments or output of a tool call:

<Screenshot src="/img/tools-chat-tetris.webp" />
### Top Panel Tools Selector
- **One-Click Enable/Disable**: Use the new Tool Selector in the chat interface (top-right) to control which tools are available to the model
- **Granular Control**: Select **all** or **none** per group or globally, or enable individual tools for each chat session

<Screenshot src="/img/llms-tools-top.webp" />

When tools are used within AI Requests, a special UI renders tool calls and responses.
## How It Works
### Discovery Phase (Startup)
1. The extension loads `mcp.json` and filters out servers with missing environment variables
2. All valid servers are discovered **in parallel**
3. Each server is started and queried for its available tools via `list_tools()`
4. Tools are registered in **config order** (deterministic - later servers override earlier ones for duplicate tool names)
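
To make the flow concrete, here is an illustrative sketch (not the extension's actual source), assuming the FastMCP `Client` can be constructed from a standard `mcpServers` config dict and exposes `list_tools()`; the helper names are made up:

```python
import asyncio
from fastmcp import Client

async def discover_server(name: str, cfg: dict) -> tuple:
    """Start one MCP server and list the tools it exposes."""
    async with Client({"mcpServers": {name: cfg}}) as client:
        return name, await client.list_tools()

async def discover_all(servers: dict) -> dict:
    """Discover all configured servers concurrently, then register tools in config order."""
    results = await asyncio.gather(
        *(discover_server(name, cfg) for name, cfg in servers.items()),
        return_exceptions=True,  # one failing server should not block the others
    )
    registry: dict = {}
    for result in results:
        if isinstance(result, Exception):
            continue  # server skipped (e.g. command not found)
        name, tools = result
        for tool in tools:
            registry[tool.name] = (name, tool)  # later servers override duplicate names
    return registry
```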
### Execution Phase (Runtime)
When a tool is invoked:
1. A **fresh connection** is established to the appropriate MCP server
2. The tool is executed with the provided arguments (configurable timeout, default 60s)
3. The connection is closed after execution

This fresh-connection-per-execution approach ensures reliability and isolation between tool calls.
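
For example, a single call could look roughly like this (again assuming the FastMCP `Client`; the server config, tool name, and arguments are placeholders):

```python
import asyncio
import os
from fastmcp import Client

MCP_TIMEOUT = float(os.environ.get("MCP_TIMEOUT", "60.0"))

async def execute_tool(name: str, cfg: dict, tool_name: str, arguments: dict):
    """Open a fresh connection, run one tool with a timeout, then close the connection."""
    async with Client({"mcpServers": {name: cfg}}) as client:
        # The connection only lives for the duration of this one call
        return await asyncio.wait_for(client.call_tool(tool_name, arguments), MCP_TIMEOUT)

# Placeholder usage: call a hypothetical "git_status" tool on the git server configured above
result = asyncio.run(execute_tool(
    "git",
    {"command": "uvx", "args": ["mcp-server-git", "--repository", "."]},
    "git_status",
    {"repo_path": "."},
))
```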
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `MCP_TIMEOUT` | `60.0` | Timeout in seconds for MCP tool execution |
| `MCP_LOG_ERRORS` | `0` | Set to `1` to enable detailed stderr logging for tool execution |
## Troubleshooting
If tools are not appearing:
- Check that the MCP server command is accessible in your `PATH` (see the snippet after this list)
- Verify that all required environment variables are exported
- Enable detailed error logging with `MCP_LOG_ERRORS=1`
- Review the logs in the `logs/` directory for specific error messages
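
A quick way to verify the first two checks above from Python (the command and variable names are just examples):

```python
import os
import shutil

# Is the server command on PATH? None means it can't be found.
print(shutil.which("npx"))

# Is the referenced environment variable exported? False means that server is skipped.
print("GEMINI_API_KEY" in os.environ)
```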
If tools are timing out:
- Increase the timeout with `MCP_TIMEOUT=120` (or a higher value, in seconds)
### Log Files
Logs are stored in the extension's `logs/` directory:

| Log File | Description |
|----------|-------------|
| `{server}_discovery.stderr.log` | Stderr output from server during discovery phase |
| `{tool_name}.stderr.log` | Stderr output from tool execution (when `MCP_LOG_ERRORS=1`) |
## Requirements
- Python 3.9+ (for dict insertion order guarantee)