This plugin provides real-time lip synchronization for MetaHuman characters by processing audio input to generate visemes.
Features:
- Simple, intuitive setup
- Real-time and offline viseme generation
- Multiple audio input sources (microphone, playback, synthesized speech, custom PCM)
- Direct integration with MetaHuman’s face animation system
- Configurable interpolation settings
- Blueprint-friendly implementation
- No external dependencies or internet required
- Cross-platform support (Windows, Mac, Android, MetaQuest)
ℹ️ Note: The images with plugin examples and the demo project were created using the Runtime Audio Importer and/or Runtime Text To Speech plugins. To follow these examples, you will need to install those plugins as well. However, you can also implement your own audio input solution using Runtime MetaHuman Lip Sync.
🗣️ Bring your MetaHuman characters to life with real-time, cross-platform lip synchronization!
Transform your MetaHuman characters with seamless, real-time lip synchronization that works completely offline and cross-platform! Watch as your digital humans respond naturally to speech input, creating immersive and believable conversations with minimal setup.
Quick links:
- 📄 Documentation
- 💬 Discord support chat
- 📌 Custom Development: solutions@georgy.dev (tailored solutions for teams & organizations)
🚀 Key features:
- Real-time Lip Sync from microphone input
- Offline Processing – no internet connection required
- Cross-platform Compatibility: Windows, Mac, Android, and MetaQuest
- Multiple Audio Sources:
  - Live microphone input (via Runtime Audio Importer’s capturable sound wave)
  - Captured audio playback (via Runtime Audio Importer’s capturable sound wave)
  - Synthesized speech (via Runtime Text To Speech)
  - Custom audio data in float PCM format
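To illustrate the custom-PCM input option, here is a minimal, hypothetical sketch of what "float PCM format" typically means: 32-bit float samples in the range [-1, 1]. The function name and layout below are assumptions for illustration, not part of the plugin's API:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical example: generate one second of a 440 Hz mono tone as
// float PCM (32-bit float samples in [-1, 1]). A buffer like this is
// the kind of data the "custom audio data" input expects.
std::vector<float> MakeTone(float frequencyHz, std::uint32_t sampleRate) {
    std::vector<float> pcm(sampleRate);
    const float twoPi = 6.28318530718f;
    for (std::uint32_t i = 0; i < sampleRate; ++i)
        pcm[i] = 0.5f * std::sin(twoPi * frequencyHz * float(i) / float(sampleRate));
    return pcm; // pass this buffer to the plugin's custom-PCM input
}
```

For stereo or multi-channel sources, float PCM is conventionally interleaved (L, R, L, R, …), so a mono downmix or per-channel extraction may be needed before feeding the lip-sync processor.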
💡 How it works:
The plugin processes audio input to generate visemes (visual representations of phonemes) that drive your MetaHuman’s facial animations in real-time, creating natural-looking speech movements that match the audio perfectly.
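As a conceptual illustration of this pipeline (not the plugin's actual API), the sketch below derives a target viseme frame from one window of float PCM samples, then smooths the current weights toward it with a configurable interpolation speed. The viseme count, the chosen viseme index, and all function names are hypothetical; a real implementation would run a phoneme classifier rather than a simple loudness heuristic:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// A "viseme frame" is a set of blend weights (0..1) for mouth shapes.
constexpr std::size_t kNumVisemes = 15; // size assumed for illustration
using VisemeWeights = std::array<float, kNumVisemes>;

// Toy "analysis" step: derive a target viseme frame from one window of
// float PCM samples. Here we simply open a single (pretend "aa") viseme
// in proportion to the window's RMS loudness.
VisemeWeights AnalyzeWindow(const std::vector<float>& pcm) {
    double sumSq = 0.0;
    for (float s : pcm) sumSq += double(s) * s;
    float rms = pcm.empty() ? 0.f : float(std::sqrt(sumSq / pcm.size()));
    VisemeWeights target{};                  // all zeros = mouth closed
    target[10] = std::min(1.f, rms * 4.f);   // index 10: pretend "aa"
    return target;
}

// "Configurable interpolation": move the current weights toward the
// target each tick, so the mouth animates smoothly instead of snapping.
void Interpolate(VisemeWeights& current, const VisemeWeights& target,
                 float speed, float deltaSeconds) {
    float alpha = 1.f - std::exp(-speed * deltaSeconds);
    for (std::size_t i = 0; i < kNumVisemes; ++i)
        current[i] += (target[i] - current[i]) * alpha;
}
```

In the actual plugin these resulting weights would drive the MetaHuman face animation curves each frame; the exponential smoothing above is one common way an interpolation-speed setting maps to per-frame blending.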
🎮 Perfect for:
- Interactive NPCs and digital humans
- Virtual assistants and guides
- Cutscene dialogue automation
- Live character performances
- VR/AR experiences
- Educational applications
- Accessibility solutions
🌟 Works great with:
- Runtime Audio Importer – For microphone capture and audio processing
- Runtime Text To Speech – For synthesized speech generation