Nyx Web Chat widget enables full-duplex voice interaction. The user speaks, interrupts, and receives visual feedback in real-time, with the avatar powered by our custom audio-to-motion engine.
Get up and running in minutes. Deploy the server and embed the widget on any site.
The server handles audio processing and agent communication.
```shell
# 1. Clone & Setup
git clone https://github.com/myned-ai/avatar-chat-server.git
cd avatar-chat-server
uv sync

# 2. Configure Environment
# Create your environment variables as shown in .env.example
# Required: set OPENAI_API_KEY or GEMINI_API_KEY
# Optional: set AUTH_SECRET_KEY for secured access

# 3. Add Knowledge (Optional)
# Create a knowledge base text file (e.g. from your website content)
# Set the KNOWLEDGE_BASE_SOURCE env variable to its location to ground the agent:
# a local file path (e.g. "data/knowledge.md") or a URL

# 4. Run Server
uv run python src/main.py

# Prefer containerization? Use the Docker setup instead:
docker-compose up -d
```
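Step 3 says `KNOWLEDGE_BASE_SOURCE` may be either a local file path or a URL. A minimal sketch of how such a resolver could work — the `load_knowledge` helper below is hypothetical and not part of the server's actual API:

```python
import os
from pathlib import Path
from urllib.parse import urlparse
from urllib.request import urlopen


def load_knowledge(source: str) -> str:
    """Load grounding text from a local file path or an http(s) URL.

    Hypothetical helper mirroring the KNOWLEDGE_BASE_SOURCE behaviour
    described above; the real server's implementation may differ.
    """
    if urlparse(source).scheme in ("http", "https"):
        # Remote source: fetch over HTTP(S)
        with urlopen(source, timeout=10) as resp:
            return resp.read().decode("utf-8")
    # Local source: read from disk
    return Path(source).read_text(encoding="utf-8")


# Example: KNOWLEDGE_BASE_SOURCE="data/knowledge.md"
source = os.environ.get("KNOWLEDGE_BASE_SOURCE", "data/knowledge.md")
```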
Update your environment variable configuration:
```shell
AUTH_ENABLED=true
# Generate a secret key with: openssl rand -hex 32
AUTH_SECRET_KEY=...
AUTH_ALLOWED_ORIGINS=...
```

Add the widget to any HTML page using the CDN link.
```html
<!-- Container -->
<div id="avatar-chat"></div>

<!-- CDN Script -->
<script src="https://cdn.jsdelivr.net/npm/@myned-ai/avatar-chat-widget"></script>
<script>
  AvatarChat.init({
    container: '#avatar-chat',
    serverUrl: 'wss://your-server.com/ws',
    authEnabled: true,
    position: 'bottom-right'
  });
</script>
```
Authenticated connection flow: your backend signs a token with `AUTH_SECRET_KEY`, and the widget passes it as a `token` param when opening the WebSocket (`wss://.../ws?token=...`).

Fill in the required fields (e.g. API keys), keep the defaults for everything else, and you're good to go.
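One way to implement this flow on the backend is an HMAC-signed token with an embedded expiry. A sketch under that assumption — the `<expiry>.<signature>` token format here is illustrative, not the server's documented scheme:

```python
import hashlib
import hmac
import time

# In production, read this from the AUTH_SECRET_KEY env variable
# (e.g. the output of `openssl rand -hex 32`).
SECRET_KEY = "replace-with-your-secret"


def mint_token(ttl_seconds: int = 300) -> str:
    """Return an "<expiry>.<hex signature>" token; format is illustrative only."""
    expiry = str(int(time.time()) + ttl_seconds)
    sig = hmac.new(SECRET_KEY.encode(), expiry.encode(), hashlib.sha256).hexdigest()
    return f"{expiry}.{sig}"


def verify_token(token: str) -> bool:
    """Check signature and expiry; constant-time compare resists timing attacks."""
    try:
        expiry, sig = token.split(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY.encode(), expiry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expiry) > time.time()
```

The widget would then append the minted token to the connection URL, e.g. `wss://your-server.com/ws?token=<token>`.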
All deployment scripts are available in the server repo.
* Azure is set to Always On but still needs some warm-up.
* On GCP, minimum instances must be set manually for Always On.
* On AWS, choose "Upload a template file" and upload from here.
Designed for natural human connection, built on robust open-source research and cutting-edge infrastructure.
True full-duplex communication. Users can speak while the avatar is speaking. The system uses voice activity detection (VAD) to handle interruptions accurately, stopping the avatar instantly when the user cuts in, just like a real conversation.
Beyond simple lip-sync. We use Blendshapes to drive natural facial movements—eyebrows, blinks, and smiles—that match the emotional tone of the voice response.
The "Audio to Expression" model is heavily quantized. It runs efficiently on standard CPUs, meaning you don't need expensive GPU instances to host the avatar server.
Includes a built-in UI for real-time subtitle sync. Users can read along with the conversation, ensuring accessibility and clarity in noisy environments.
Out-of-the-box support for custom knowledge. Give the avatar your product manuals, FAQs, or other documents, and it will provide answers based on your specific content.
Secured via HMAC & Token-based authentication. The widget requires a signed token from your backend to initiate a WebSocket connection, preventing unauthorized usage of your LLM credits.
A lightweight, framework-agnostic JavaScript bundle that embeds into any website.
A Python-based WebSocket server that acts as the central brain and security layer.
A highly optimized inference engine enabling real-time facial animation from audio input.