If you find this useful, please ⭐ the repo!
Latest app release --> https://github.com/scouzi1966/maclocal-api/releases/tag/v0.9.3
Tip
What's new in v0.9.3 --> afm -w -g enables WebUI + API gateway mode. Auto-discovers and proxies to Ollama, LM Studio, Jan, and other local LLM backends. Reasoning model support (Qwen, DeepSeek, gpt-oss).
Truly a killer feature. -g is a new gateway mode that aggregates and proxies all of your locally running model servers (Ollama, llama-server, LM Studio, Jan, and others) and exposes a single API for all of them on the default port 9999! Combined with -w (afm -wg), you instantly get access to every model served on your machine in a single web interface with very little setup friction. Please comment with feature requests, bugs, anything! I hope you're enjoying this app. Star it if you are.
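For example, once the gateway is running (afm -w -g), every discovered backend is reachable through the single endpoint on port 9999. A minimal Python sketch, assuming the gateway lists the proxied models under the standard /v1/models route and accepts their IDs in chat requests (consistent with the aggregation behavior described above):

from openai import OpenAI

# One client for everything the gateway discovered (Ollama, LM Studio, Jan, ...)
client = OpenAI(api_key="not-needed-for-local", base_url="http://localhost:9999/v1")

# List every model the gateway aggregated from the local backends
models = [m.id for m in client.models.list().data]
print("Available via gateway:", models)

# Chat with whichever model you pick from the list
response = client.chat.completions.create(
    model=models[0],  # e.g. "foundation", or a model proxied from another backend
    messages=[{"role": "user", "content": "Say hello from the gateway."}],
)
print(response.choices[0].message.content)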
Tip
brew tap scouzi1966/afm
brew install afm
brew upgrade afm (From an earlier install with brew)
Or with a single command (Homebrew or pip):
brew install scouzi1966/afm/afm
pip install macafm

To start a webchat:
afm -w
Tip
pip install macafm
pip install --upgrade macafm (from an earlier install with pip)

MacLocalAPI is the repo for the afm command on macOS 26 Tahoe. The afm command (CLI) lets you access the on-device Apple Foundation Models LLM from the command line, either with a single prompt or in API mode, and it integrates with other OS command-line tools through standard Unix pipes.
Additionally, it contains a built-in server that exposes the on-device Foundation Model through an OpenAI-compatible API, so it works with the standard OpenAI SDKs and front ends such as Open WebUI. By default, launching the plain 'afm' command immediately starts a server on port 9999. Simple, fast.
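Once the server is up, you can sanity-check it from any language. A minimal Python sketch using only the standard library, assuming the default port 9999 and the /health and /v1/models endpoints documented below (the response is assumed to follow the usual OpenAI list format):

import json
from urllib.request import urlopen

BASE = "http://localhost:9999"

# Check that the afm server is responding
with urlopen(f"{BASE}/health") as resp:
    print("health:", resp.read().decode())

# List the models the server exposes (the on-device model is "foundation")
with urlopen(f"{BASE}/v1/models") as resp:
    payload = json.loads(resp.read())
    print("models:", [m["id"] for m in payload.get("data", [])])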
Note: afm command supports trained adapters using Apple's Toolkit: https://developer.apple.com/apple-intelligence/foundation-models-adapter/
I have also created a wrapper tool that makes fine-tuning AFM easier on both M-series Macs and Linux with CUDA, using Apple's provided LoRA toolkit.
Get it here: https://github.com/scouzi1966/AFMTrainer
You can also explore a pure and private macOS chat experience (non-CLI) here: https://github.com/scouzi1966/vesta-mac-dist
Choose ONE of two methods to install (Homebrew or pip):
# Add the tap (first time only)
brew tap scouzi1966/afm
# Install or upgrade AFM
brew install afm
# OR upgrade existing:
brew upgrade afm
# Verify installation
afm --version # Should show latest release
# Brew workaround: if you are having issues upgrading, try the following:
brew uninstall afm
brew untap scouzi1966/afm
# Then try again

pip install macafm
# Verify installation
afm --version

HOW TO USE afm:
# Start the API server only (Apple Foundation Model on port 9999)
afm
# Start the API server with WebUI chat interface
afm -w
# Start with WebUI and API gateway (auto-discovers Ollama, LM Studio, Jan, etc.)
afm -w -g
# Start on a custom port with a trained LoRA adapter
afm -a ./my_adapter.fmadapter -p 9998
# Use in single prompt mode
afm -i "you are a pirate, you only answer in pirate jargon" -s "Write a story about Einstein"
# Use in single prompt mode with adapter
afm -s "Write a story about Einstein" -a ./my_adapter.fmadapter
# Use in pipe mode
ls -ltr | afm -i "list the files only of ls output"

A very simple-to-use macOS server application that exposes Apple's Foundation Models through OpenAI-compatible API endpoints. Run Apple Intelligence locally with full OpenAI API compatibility, for use with Python, JavaScript, or even Open WebUI (https://github.com/open-webui/open-webui).
With the same command, it also supports a single-prompt mode for interacting with the model without starting the server. In this mode, you can pipe input from any other command-line utility.
As a bonus, both modes allow the use of a LoRA adapter trained with Apple's toolkit. This lets you quickly test adapters without having to integrate them into your app or involve Xcode.
The magic command is afm
- OpenAI API Compatible - Works with existing OpenAI client libraries and applications
- LoRA Adapter Support - Supports fine-tuning with LoRA adapters using Apple's tuning toolkit
- Apple Foundation Models - Uses Apple's on-device 3B parameter language model
- Privacy-First - All processing happens locally on your device
- Fast & Lightweight - No network calls, no API keys required
- Easy Integration - Drop-in replacement for OpenAI API endpoints
- Token Usage Tracking - Provides accurate token consumption metrics (see the sketch after this list)
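Because responses follow the OpenAI chat completions schema, the token metrics come back in the standard usage field. A minimal sketch, assuming the server is running on the default port 9999 and populates usage as the Token Usage Tracking feature above describes:

from openai import OpenAI

client = OpenAI(api_key="not-needed-for-local", base_url="http://localhost:9999/v1")

response = client.chat.completions.create(
    model="foundation",
    messages=[{"role": "user", "content": "Why does on-device inference help privacy?"}],
)

# Token consumption metrics reported by the server (word-based approximation
# for the Foundation model; see the limitations section below)
print(response.usage.prompt_tokens, response.usage.completion_tokens, response.usage.total_tokens)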
- macOS 26 (Tahoe) or later
- Apple Silicon Mac (M1/M2/M3/M4 series)
- Apple Intelligence enabled in System Settings
- Xcode 26 (for building from source)
# Add the tap
brew tap scouzi1966/afm
# Install AFM
brew install afm
# Verify installation
afm --version

# Install from PyPI
pip install macafm
# Verify installation
afm --version

# Clone the repository (includes llama.cpp WebUI as a submodule)
git clone --recursive https://github.com/scouzi1966/maclocal-api.git
cd maclocal-api
# Build the WebUI (requires Node.js)
cd vendor/llama.cpp/tools/server/webui
npm install && npm run build
cd ../../../../..
mkdir -p Resources/webui
cp vendor/llama.cpp/tools/server/public/index.html.gz Resources/webui/
# Build the project
swift build -c release
# Run
./.build/release/afm --version

# API server only (Apple Foundation Model on port 9999)
afm
# API server with WebUI chat interface
afm -w
# WebUI + API gateway (auto-discovers Ollama, LM Studio, Jan, etc.)
afm -w -g
# Custom port with verbose logging
afm -p 8080 -v
# Show help
afm -h

POST /v1/chat/completions
Compatible with OpenAI's chat completions API.
curl -X POST http://localhost:9999/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "foundation",
"messages": [
{"role": "user", "content": "Hello, how are you?"}
]
}'

GET /v1/models
Returns available Foundation Models.
curl http://localhost:9999/v1/models

GET /health
Server health status endpoint.
curl http://localhost:9999/health

from openai import OpenAI
# Point to your local MacLocalAPI server
client = OpenAI(
api_key="not-needed-for-local",
base_url="http://localhost:9999/v1"
)
response = client.chat.completions.create(
model="foundation",
messages=[
{"role": "user", "content": "Explain quantum computing in simple terms"}
]
)
print(response.choices[0].message.content)
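Streaming is enabled on the server by default (see the --no-streaming option below), so the usual OpenAI streaming pattern should also work. A minimal sketch, not an official example:

from openai import OpenAI

client = OpenAI(api_key="not-needed-for-local", base_url="http://localhost:9999/v1")

# Ask for a streamed response and print tokens as they arrive
stream = client.chat.completions.create(
    model="foundation",
    messages=[{"role": "user", "content": "Tell me a short story about a lighthouse"}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()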
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: 'not-needed-for-local',
baseURL: 'http://localhost:9999/v1',
});
const completion = await openai.chat.completions.create({
messages: [{ role: 'user', content: 'Write a haiku about programming' }],
model: 'foundation',
});
console.log(completion.choices[0].message.content);

# Basic chat completion
curl -X POST http://localhost:9999/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "foundation",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is the capital of France?"}
]
}'
# With temperature control
curl -X POST http://localhost:9999/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "foundation",
"messages": [{"role": "user", "content": "Be creative!"}],
"temperature": 0.8
}'

# Single prompt mode
afm -s "Explain quantum computing"
# Piped input from other commands
echo "What is the meaning of life?" | afm
cat file.txt | afm
git log --oneline | head -5 | afm
# Custom instructions with pipe
echo "Review this code" | afm -i "You are a senior software engineer"MacLocalAPI/
βββ Package.swift # Swift Package Manager config
βββ Sources/MacLocalAPI/
β βββ main.swift # CLI entry point & ArgumentParser
β βββ Server.swift # Vapor web server configuration
β βββ Controllers/
β β βββ ChatCompletionsController.swift # OpenAI API endpoints
β βββ Models/
β βββ FoundationModelService.swift # Apple Foundation Models wrapper
β βββ OpenAIRequest.swift # Request data models
β βββ OpenAIResponse.swift # Response data models
βββ README.md
OVERVIEW: macOS server that exposes Apple's Foundation Models through
OpenAI-compatible API
Use -w to enable the WebUI, -g to enable API gateway mode (auto-discovers and
proxies to Ollama, LM Studio, Jan, and other local LLM backends).
USAGE: afm <options>
OPTIONS:
-s, --single-prompt <single-prompt>
Run a single prompt without starting the server
-i, --instructions <instructions>
Custom instructions for the AI assistant (default:
You are a helpful assistant)
-v, --verbose Enable verbose logging
--no-streaming Disable streaming responses (streaming is enabled by
default)
-a, --adapter <adapter> Path to a .fmadapter file for LoRA adapter fine-tuning
-p, --port <port> Port to run the server on (default: 9999)
-H, --hostname <hostname>
Hostname to bind server to (default: 127.0.0.1)
-t, --temperature <temperature>
Temperature for response generation (0.0-1.0)
-r, --randomness <randomness>
Sampling mode: 'greedy', 'random',
'random:top-p=<0.0-1.0>', 'random:top-k=<int>', with
optional ':seed=<int>'
-P, --permissive-guardrails
Permissive guardrails for unsafe or inappropriate
responses
-w, --webui Enable webui and open in default browser
-g, --gateway Enable API gateway mode: discover and proxy to local
LLM backends (Ollama, LM Studio, Jan, etc.)
--prewarm <prewarm> Pre-warm the model on server startup for faster first
response (y/n, default: y)
--version Show the version.
-h, --help Show help information.
Note: afm also accepts piped input from other commands, equivalent to using -s
with the piped content as the prompt.
The server respects standard logging environment variables:
LOG_LEVEL - Set logging level (trace, debug, info, notice, warning, error, critical)
- Model Scope: Apple Foundation Model is a 3B parameter model (optimized for on-device performance)
- macOS 26+ Only: Requires the latest macOS with Foundation Models framework
- Apple Intelligence Required: Must be enabled in System Settings
- Token Estimation: Uses word-based approximation for token counting (Foundation model only; proxied backends report real counts)
- Ensure you're running macOS 26 or later
- Enable Apple Intelligence in System Settings → Apple Intelligence & Siri
- Verify you're on an Apple Silicon Mac
- Restart the application after enabling Apple Intelligence
- Check if the port is already in use: lsof -i :9999
- Try a different port: afm -p 8080
- Enable verbose logging: afm -v
- Ensure you have Xcode 26 installed
- Update Swift toolchain: xcode-select --install
- Clean and rebuild: swift package clean && swift build -c release
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
# Clone the repo with submodules
git clone --recursive https://github.com/scouzi1966/maclocal-api.git
cd maclocal-api
# Build WebUI (first time only, requires Node.js)
cd vendor/llama.cpp/tools/server/webui && npm install && npm run build && cd ../../../../..
mkdir -p Resources/webui && cp vendor/llama.cpp/tools/server/public/index.html.gz Resources/webui/
# Build for development
swift build
# Run with verbose logging
./.build/debug/afm -w -g -v

This project is licensed under the MIT License - see the LICENSE file for details.
- Apple for the Foundation Models framework
- The Vapor Swift web framework team
- OpenAI for the API specification standard
- The Swift community for excellent tooling
If you encounter any issues or have questions:
- Check the Troubleshooting section
- Search existing GitHub Issues
- Create a new issue with detailed information about your problem
- Streaming response support
- Function/tool calling implementation
- Multiple model support (API gateway mode)
- Performance optimizations
- Docker containerization (when supported)
- Web UI for testing (llama.cpp WebUI integration)
Made with ❤️ for the Apple Silicon community
Bringing the power of local AI to your fingertips.