Bring multiple “crew” AIs into your Foundry world. Define up to 8 distinct AI identities, each with a trigger name, speaking actor, personality prompt, knowledge notes, and per-user audio access. Optional ElevenLabs TTS turns replies into voice, and NOVA routes the audio only to the right listeners via SocketLib.
Perfect for ship AIs (sci‑fi), familiars or patrons/gods (fantasy), or any talking NPC that should feel alive.
- Features
- Requirements
- Installation
- Quick Start
- Configuration
- How to Use
- Macro Examples
- Adding AI Portraits
- Tips & Best Practices
- Troubleshooting
- Credits
- License
- Links
- Up to 8 AIs, each with:
  - Name / Trigger (e.g., Nova, Robotus, Oracle)
  - Actor to speak as (chat name & portrait integration)
  - Personality Prompt (large field)
  - Knowledge Notes (large field)
  - Per-user Access list (who may hear TTS) or ALL
  - Per-AI ElevenLabs Voice ID (optional)
- Audio routing via SocketLib
  - TTS is played only for authorized listeners
  - Test utilities: /novabeep, /novatest, /novaself
- Whisper-aware
  - If a user whispers to an AI, the AI replies privately to the same recipients
  - TTS only plays for recipients who are both in the whisper and allowed by the AI
- GM Quality-of-life
  - GM Preview Voices: GM can monitor all TTS if desired
  - Optional Chat Portrait support for beautiful chat bubbles
- Foundry VTT v13
- OpenAI (ChatGPT) account with API access
- SocketLib (required for multi-client audio)
- Optional: Chat Portrait
- Optional: ElevenLabs account for TTS
Manifest URL (Foundry → Add-on Modules → Install Module):
https://raw.githubusercontent.com/BdrGM/nova-multiai/main/module.json
Or download the ZIP from Releases.
- Enable SocketLib (required) and Chat Portrait (optional).
- Enable NOVA Multi-AI.
- Open Game Settings → Module Settings → NOVA Multi-AI.
- Toggle on an AI Slot and set:
  - Name/Trigger, Actor to speak as
  - Personality Prompt + Knowledge Notes
  - Access (comma-separated Foundry user names or ALL)
  - (Optional) ElevenLabs Voice ID
- (Optional) Paste your ElevenLabs API Key.
- In chat, type:
YourAIName, hello there.
Authorized listeners will hear the voice.
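You can also trigger an AI from a macro or the console instead of typing in chat: a plain chat message that starts with the trigger word works the same way, as the Macro Examples below show. A minimal sketch (Nova stands in for whichever trigger you configured):

```js
// Post a normal chat message addressed to the AI's trigger word.
// NOVA reacts to the chat log, so this is equivalent to typing it yourself.
ChatMessage.create({ content: "Nova, hello there." });
```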
| Setting | Description |
|---|---|
| Enabled | Turn the slot on/off |
| Name / Trigger | Word that activates the AI (e.g., Nova) |
| Actor to Speak As | Controls chat name & portrait |
| Personality Prompt | Behavior, tone, limits |
| Knowledge Notes | World/ship/quest facts |
| Access (user names or ALL) | Comma-separated Foundry user names or ALL |
| Voice ID | ElevenLabs voice for this AI |
Tip: Put rules of engagement in the Prompt and setting facts in Knowledge Notes.
- TTS is delivered via SocketLib to the exact Access list. ALL allows everyone (GMs can still opt out). A routing sketch follows this list.
- Whispers further restrict both the text and audio audience.
- GM Preview Voices: when enabled, the GM hears all AI TTS regardless of Access. Great for testing and streaming.
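For context, this is roughly how per-user audio routing with SocketLib looks in general. It is an illustrative sketch, not NOVA's actual code; the module id "nova-multiai" and handler name "playNovaAudio" are assumptions.

```js
// Illustrative sketch only: not NOVA's internals.
let socket;

Hooks.once("socketlib.ready", () => {
  // The id passed here must match the module's real id (assumed value).
  socket = socketlib.registerModule("nova-multiai");
  // Runs on every client the call is routed to; plays the audio locally.
  socket.register("playNovaAudio", (audioUrl) => {
    foundry.audio.AudioHelper.play({ src: audioUrl, volume: 0.8 });
  });
});

// Sender side: resolve the Access list (Foundry user names) to active user ids
// and execute the handler only on those clients.
async function routeAudio(audioUrl, allowedUserNames) {
  const ids = game.users
    .filter((u) => u.active && (allowedUserNames.includes("ALL") || allowedUserNames.includes(u.name)))
    .map((u) => u.id);
  await socket.executeForUsers("playNovaAudio", ids, audioUrl);
}
```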
- ElevenLabs → Profile → API Keys → create/copy key.
- In Foundry, paste the key into NOVA settings.
- ElevenLabs → Voices → copy a Voice ID.
- Paste that Voice ID into each AI slot you want voiced.
API keys live in world settings, not in the repo.
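For reference, a direct ElevenLabs text-to-speech call looks roughly like the sketch below. NOVA performs this step internally; the sketch only shows where the API key and Voice ID fit, and the model_id shown is an example value, not necessarily what NOVA uses.

```js
// Minimal ElevenLabs TTS request (sketch). Returns an object URL you could
// hand to an <audio> element or to Foundry's audio helpers.
async function elevenLabsTTS(apiKey, voiceId, text) {
  const resp = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
    method: "POST",
    headers: { "xi-api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }) // model_id is an example
  });
  if (!resp.ok) throw new Error(`ElevenLabs request failed: ${resp.status}`);
  const blob = await resp.blob(); // raw audio (MPEG) bytes
  return URL.createObjectURL(blob);
}
```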
Just address the AI by its trigger:
Nova, calculate jump fuel for 2 parsecs.
Whisper to an AI just as you would whisper to another user:
/w Nova plot a quiet route to 61 Cygni.
- NOVA detects it was whispered and responds privately to the same recipients.
- TTS is sent only to those recipients who also have Access.
| Command | What it does |
|---|---|
| /novabeep | Plays a short beep to the users who would hear TTS (routing test) |
| /novatest | Sends a short ElevenLabs line to confirm TTS |
| /novaself | Confirms that your client receives routed audio |
```js
// Prompt for a line of dialogue and post it to chat, prefixed with the
// AI's trigger word so NOVA picks it up.
new Dialog({
  title: "Talk to Nova",
  content: `<p>What do you say to Nova?</p><input type="text" id="line" style="width:100%">`,
  buttons: {
    go: {
      label: "Send",
      callback: (html) => {
        const line = html.find("#line").val()?.trim();
        if (!line) return;
        ChatMessage.create({ content: `Nova, ${line}` });
      }
    }
  }
}).render(true);
```

```js
// Same idea, but whispered: the message is sent as a private /w to Nova,
// so the reply and any TTS stay limited to the whisper recipients.
new Dialog({
  title: "Whisper Nova",
  content: `<p>Whisper to Nova</p><input type="text" id="line" style="width:100%">`,
  buttons: {
    go: {
      label: "Send",
      callback: (html) => {
        const line = html.find("#line").val()?.trim();
        if (!line) return;
        ChatMessage.create({
          content: `/w Nova ${line}`,
          whisper: [game.user.id]
        });
      }
    }
  }
}).render(true);
```

Duplicate these macros for other AIs (change Nova to another trigger).
To display a portrait for each AI in chat:
- In Foundry, create a Character Actor with the same name as your AI’s trigger (e.g., Nova).
- Set the actor image/portrait to whatever picture you want for that AI.
- In the NOVA Multi-AI settings, select this Actor in the “Actor to Speak As” dropdown.
When the AI speaks, Foundry will use the chosen actor’s name and portrait in the chat log.
For best results, install the optional Chat Portrait module to get styled chat bubbles with the AI’s image.
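If you prefer to set this up from a macro rather than the UI, a sketch like the one below works in most systems. The actor type "character" and the image path are assumptions; adjust them to your game system and file layout.

```js
// Create an actor named after the AI trigger and point its portrait at an image.
// "character" is the usual actor type in systems like PF2e/D&D5e; adjust if needed.
const actor = await Actor.create({
  name: "Nova",
  type: "character",
  img: "worlds/my-world/art/nova-portrait.webp" // example path
});
ui.notifications.info(`Created ${actor.name}; select it as "Actor to Speak As" in NOVA settings.`);
```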
NOVA can rewrite AI chat into fantasy languages and glyphs using the Polyglot module, and also send that stylized text to TTS so speech “sounds” like the chosen language.
- Install and enable Polyglot.
- Install and enable SocketLib (already required by NOVA).
- Open Settings → Module Settings → NOVA Multi-AI.
- Enable Polyglot integration (fantasy language mode).
- Choose a Fantasy Language from the dropdown (e.g., Aklo, Draconic, Skald, etc.).
- (Optional) Set Per-Actor/Per-Voice choices if you use multiple AIs.
- Save settings.
- GM view: Chat is shown in glyphs (per Polyglot rules); GM still sees/controls translation as usual.
- Players: See glyphs or translated text based on their Polyglot settings/known languages.
- TTS pipeline: NOVA transforms the text into its fantasy language variant first, then sends that to your TTS provider so the audio matches the chosen language style.
- If Polyglot is disabled or the language is set to Common/Taldane, NOVA falls back to plain text + normal voice (see the sketch after the notes below).
- This feature has been lightly tested on PF2e so far.
- If the language dropdown doesn’t appear, make sure Polyglot is enabled and reload the game once.
- If switching the Foundry UI language makes the wrong dropdown appear, reopen Module Settings and reselect your fantasy language.
- If TTS sounds “plain,” verify Polyglot integration is enabled and a non-Common language is selected.
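The fallback rule above can be summarized with a small sketch. This is not NOVA's actual transform: the scramble function below is a placeholder standing in for whatever Polyglot-based stylization the module applies.

```js
// Placeholder "fantasy" transform: reverses each word. The real module relies
// on Polyglot for glyphs/scrambling; this only illustrates the branching.
function scramblePlaceholder(text) {
  return text.split(" ").map((w) => [...w].reverse().join("")).join(" ");
}

function prepareSpeech(text, language) {
  const polyglotActive = game.modules.get("polyglot")?.active ?? false;
  const isCommon = ["common", "taldane"].includes((language ?? "common").toLowerCase());
  if (!polyglotActive || isCommon) {
    return { chatText: text, ttsText: text };         // plain text + normal voice
  }
  const stylized = scramblePlaceholder(text);
  return { chatText: stylized, ttsText: stylized };   // stylized text is also sent to TTS
}
```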
- Keep Prompts focused; put world facts in Knowledge Notes.
- Access lists must use Foundry user names, not character names.
- Use GM Preview when rehearsing scenes; disable for live games if you don’t want to hear everything.
- Give each AI a distinct voice for instant flavor.
No one hears TTS
- Ensure SocketLib is enabled.
- Check Access (user names or ALL).
- Run /novabeep to verify routing without TTS.
- Make sure each browser has unlocked audio (click anywhere in the page or play any sound once).
GM hears TTS but players don’t
- Players may not be on the AI’s Access list.
- Players can run /novaself to test their client.
Private whisper wasn’t private
- Confirm you actually sent a /w to the AI.
- NOVA replies to the same recipients, and TTS is limited to those recipients with Access.
Portrait/name missing
- Install Chat Portrait and set Actor to Speak As.
- Author: BdGM
- Foundry VTT by Atropos
- SocketLib by Farling (required)
- OpenAI — text generation via the OpenAI API (Chat Completions). This project is not affiliated with or endorsed by OpenAI.
- Chat Portrait by p4535992 (recommended)
- Fantasy-language glyphs & comprehension are powered by Polyglot by @mclemente (recommended)
- ElevenLabs for TTS (optional). This project is not affiliated with or endorsed by ElevenLabs.
MIT © BdGM
- Repo: https://github.com/BdrGM/nova-multiai
- Manifest: https://raw.githubusercontent.com/BdrGM/nova-multiai/main/module.json
- Releases: https://github.com/BdrGM/nova-multiai/releases
