TAC-Twin is an end-to-end framework for rapidly creating interactive, personalized avatars for scalable training and simulation applications. Built as an extension of the Virtual Human Toolkit and powered by RIDE, the framework integrates technologies for audio-visual sensing, speech recognition, natural language processing, nonverbal behavior generation, and high-fidelity text-to-speech synthesis.
A personalized avatar is a real-time, embodied digital representation of an actual individual rather than a generic character. The creation pipeline requires only a single facial photograph, processed through a photorealistic character generation workflow, then refined, customized, and deployed in a real-time 3D environment for integration with conversational AI and synthetic voice generation. The system also supports rapid generation of generic avatars from high-quality synthetic headshots produced by generative AI, enabling the creation of diverse, realistic or stylized cohorts within minutes.
The standalone Unity release facilitates an initial use case examining whether personalized avatars enhance engagement, motivation, and performance compared to generic avatars.
Documentation for this standalone Unity release can be found in the Wiki section of this GitHub repository. Detailed information on how to create interactive embodied conversational agents is part of the VHToolkit's Wiki.
TAC-Twin is licensed under the USC-RL v3.0 license, a permissive license for academic and personal use. For commercial or government use, please contact us.
When publishing work that uses TAC-Twin, please cite the following paper:
@inproceedings{hartholt2025tac-twin,
  title={TAC-Twin: A Rapid Framework for Personalized Doppelgänger Avatar Creation Using a Modular Virtual Human Pipeline},
  author={Hartholt, Arno and Kim, Kevin and Leeds, Andrew and Sookiassian, Edward and Fast, Ed},
  booktitle={Proceedings of the 7th International Conference on Human Systems Engineering and Design (IHSED 2025)},
  year={2025},
  organization={AHFE International, USA}
}