The Tiger Slack MCP Server is an AI-accessible interface that provides LLMs (like Claude) with powerful tools for querying and analyzing Slack workspace data. Built using the Model Context Protocol, it acts as a bridge between AI assistants and your Slack database, enabling intelligent conversation analysis, user lookup, and workspace insights.
The MCP server exposes a focused set of tools that allow AI assistants to:
- Browse Workspace Structure: List channels and users with intelligent filtering and search
- Analyze Conversations: Retrieve recent messages from specific channels with full threading context
- Track User Activity: Find conversations involving specific users across the workspace
- Follow Message Threads: Access complete threaded conversations with replies and context
- Generate Slack Links: Create permalinks to messages for easy navigation
- Real-time Data Access: Queries live data from your TimescaleDB-backed Slack database
- Thread-aware Analysis: Preserves conversation context and reply relationships
- User-friendly Output: Formats messages with usernames, timestamps, and threading structure
- Flexible Transport: Supports both stdio and HTTP transport modes
- Observability Ready: Built-in OpenTelemetry instrumentation for monitoring and debugging
- Type-safe API: Full TypeScript implementation with Zod schema validation
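As a toy illustration of the "user-friendly output" and thread-aware formatting described above (the interface and function names here are assumptions for the sketch, not the server's actual code):

```typescript
// Hypothetical sketch of thread-aware message formatting. The real server's
// output shape may differ; SlackMessage and formatMessage are illustrative names.
interface SlackMessage {
  user: string;      // display name resolved from the users table
  ts: string;        // Slack timestamp, e.g. "1727712000.000100"
  text: string;
  threadTs?: string; // set when the message is a threaded reply
}

function formatMessage(msg: SlackMessage): string {
  // Slack timestamps are seconds since the epoch with a fractional suffix.
  const when = new Date(parseFloat(msg.ts) * 1000).toISOString();
  // Indent threaded replies so the conversation structure stays visible.
  const indent = msg.threadTs ? "    ↳ " : "";
  return `${indent}[${when}] ${msg.user}: ${msg.text}`;
}

const top = formatMessage({ user: "alice", ts: "1727712000.000100", text: "Deploy done" });
const reply = formatMessage({
  user: "bob",
  ts: "1727712060.000200",
  text: "Nice!",
  threadTs: "1727712000.000100",
});
console.log(top);
console.log(reply);
```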
The server connects directly to your Tiger Slack database (populated by the ingest service) and provides these core APIs:
- getChannels - List workspace channels with optional keyword filtering
- getUsers - List workspace users with profile information and search
- getRecentConversationsInChannel - Fetch recent messages from a specific channel
- getRecentConversationsWithUser - Find conversations involving a specific user
- getThreadMessages - Retrieve all messages in a specific thread
- getMessageContext - Get contextual messages around a specific message
Each API is designed to provide rich, structured data that AI assistants can easily understand and work with, making Slack data accessible for analysis, search, and insight generation.
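To make the keyword-filtering behavior concrete, here is a toy in-memory stand-in for the getChannels tool (the real tool queries TimescaleDB; the channel fields and matching rule here are assumptions for illustration):

```typescript
// Illustrative only: mimics getChannels' "optional keyword filtering"
// against a hardcoded channel list instead of the database.
interface Channel {
  id: string;
  name: string;
  topic: string;
}

const channels: Channel[] = [
  { id: "C01", name: "incidents", topic: "Production incident response" },
  { id: "C02", name: "random", topic: "Watercooler chat" },
  { id: "C03", name: "platform-eng", topic: "Infra and incident reviews" },
];

// Case-insensitive substring match on channel name or topic; no keyword
// returns every channel.
function getChannels(keyword?: string): Channel[] {
  if (!keyword) return channels;
  const k = keyword.toLowerCase();
  return channels.filter(
    (c) => c.name.toLowerCase().includes(k) || c.topic.toLowerCase().includes(k),
  );
}

console.log(getChannels("incident").map((c) => c.name)); // ["incidents", "platform-eng"]
```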
Cloning and running the server locally.
```sh
git clone git@github.com:timescale/tiger-slack.git
```

Run ./bun i to install dependencies and build the project. Use ./bun run watch http to rebuild on changes.
Create a .env file based on the .env.sample file.
```sh
cp .env.sample .env
```

The MCP Inspector is very handy.
```sh
./bun run inspector
```

| Field | Value |
|---|---|
| Transport Type | STDIO |
| Command | node |
| Arguments | dist/index.js |
Create/edit the file ~/Library/Application Support/Claude/claude_desktop_config.json to add an entry like the following, making sure to use the absolute path to your local tiger-slack-mcp-server project, and real database credentials.
```json
{
  "mcpServers": {
    "tiger-slack": {
      "command": "node",
      "args": [
        "/absolute/path/to/tiger-slack-mcp-server/dist/index.js",
        "stdio"
      ],
      "env": {
        "PGHOST": "x.y.tsdb.cloud.timescale.com",
        "PGDATABASE": "tsdb",
        "PGPORT": "32467",
        "PGUSER": "tsdbadmin",
        "PGPASSWORD": "abc123"
      }
    }
  }
}
```

This project uses ESLint for code linting with TypeScript support.
To run the linter:
```sh
./bun run lint
```

To automatically fix linting issues where possible:
```sh
./bun run lint:fix
```

The Tiger Slack MCP Server includes comprehensive observability through Logfire and OpenTelemetry, providing real-time monitoring of API calls, database queries, and system performance.
The MCP server automatically instruments:
- Tool calls with input parameters and response data
- Session management including connection lifecycle
- Transport layer (stdio/HTTP) with connection details
- Error handling with full stack traces and context
- PostgreSQL queries with query text, parameters, and timing
- Connection pooling operations and resource usage
- Query performance metrics and slow query detection
- Database errors with detailed diagnostic information
- HTTP requests (when using HTTP transport)
- Memory usage and garbage collection metrics
- CPU utilization during query processing
- Response times for all API endpoints
- Sign up at https://logfire.pydantic.dev/
- Create a new project for your MCP server deployment
- Note your project tokens for configuration
Add these variables to your .env file:
```sh
# Logfire configuration
LOGFIRE_TOKEN="pylf_..."   # Write token for sending traces/logs
LOGFIRE_ENVIRONMENT="dev"  # Logical environment (dev/staging/prod)

# Optional: Custom service configuration
SERVICE_NAME="tiger-slack-mcp"  # Service name in traces (default)
SERVICE_VERSION="0.1.0"         # Service version (default from package.json)

# Optional: Custom endpoints (defaults shown)
LOGFIRE_TRACES_ENDPOINT="https://logfire-api.pydantic.dev/v1/traces"
LOGFIRE_LOGS_ENDPOINT="https://logfire-api.pydantic.dev/v1/logs"
```

Set the instrumentation flag to enable OpenTelemetry:
```sh
# Enable OpenTelemetry instrumentation
INSTRUMENT=true
```

Important: The MCP server only enables instrumentation when INSTRUMENT=true is set, avoiding unnecessary overhead in environments where observability isn't needed.
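A minimal sketch of how such a gate typically works (the function name and log messages are assumptions, not the server's actual code):

```typescript
// Only the exact string "true" enables instrumentation, so unset, empty,
// or "false" values all leave telemetry off.
function shouldInstrument(env: Record<string, string | undefined>): boolean {
  return env.INSTRUMENT === "true";
}

if (shouldInstrument(process.env)) {
  // Here the real server would register its OpenTelemetry/Logfire
  // exporters before handling any requests.
  console.log("instrumentation enabled");
} else {
  console.log("instrumentation disabled");
}
```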
If you prefer to use a different observability backend, the server supports standard OpenTelemetry configuration:
```sh
# OpenTelemetry configuration (alternative to Logfire)
OTEL_EXPORTER_OTLP_ENDPOINT="https://your-otel-collector:4318"
OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer your-token"
OTEL_SERVICE_NAME="tiger-slack-mcp"
OTEL_SERVICE_VERSION="0.1.0"
OTEL_RESOURCE_ATTRIBUTES="environment=production"

# Enable instrumentation
INSTRUMENT=true
```