This plugin provides real-time lip synchronization for MetaHuman characters by processing audio input to generate visemes.

Features:

  • Simple, intuitive setup

  • Real-time and offline viseme generation

  • Multiple audio input sources (microphone, playback, synthesized speech, custom PCM, Pixel Streaming)

  • Direct integration with MetaHuman’s face animation system

  • Configurable interpolation settings

  • Blueprint-friendly implementation

  • No external dependencies or internet required

  • Cross-platform support (Windows, Android, Meta Quest)

ℹ️ Note: The example images and the demo project were created using the Runtime Audio Importer and/or Runtime Text To Speech plugins, so to follow these examples you will need to install those plugins as well. However, you can also implement your own audio input solution without using them.
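If you do implement your own audio input, the underlying task is to deliver raw PCM samples to whatever consumes them for viseme generation. The sketch below is a minimal, standalone C++ illustration of that idea under assumed names: FeedVisemeGenerator, PushPcm16, and the 1024-frame chunk size are hypothetical placeholders, not part of the plugin's actual API.

    // Illustrative only: standard C++ sketch of a custom audio input path.
    // FeedVisemeGenerator is a hypothetical stand-in for whatever consumes PCM chunks.
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <vector>

    // Hypothetical consumer: receives mono float samples in the range [-1, 1].
    using FeedVisemeGenerator =
        std::function<void(const float* Samples, std::size_t NumSamples, int SampleRate)>;

    // Convert interleaved 16-bit PCM (e.g. from a microphone, playback tap, or TTS engine)
    // to mono float and forward it in fixed-size chunks.
    void PushPcm16(const int16_t* Pcm, std::size_t NumFrames, int NumChannels, int SampleRate,
                   const FeedVisemeGenerator& Feed)
    {
        constexpr std::size_t ChunkFrames = 1024;  // assumed chunk size, not a plugin requirement
        std::vector<float> Mono;
        Mono.reserve(ChunkFrames);

        for (std::size_t Frame = 0; Frame < NumFrames; ++Frame)
        {
            // Downmix all channels to mono and normalize to [-1, 1].
            float Sum = 0.0f;
            for (int Ch = 0; Ch < NumChannels; ++Ch)
            {
                Sum += static_cast<float>(Pcm[Frame * NumChannels + Ch]) / 32768.0f;
            }
            Mono.push_back(Sum / static_cast<float>(NumChannels));

            if (Mono.size() == ChunkFrames)
            {
                Feed(Mono.data(), Mono.size(), SampleRate);
                Mono.clear();
            }
        }
        if (!Mono.empty())
        {
            Feed(Mono.data(), Mono.size(), SampleRate);  // flush the remainder
        }
    }

The same pattern applies regardless of where the PCM comes from (microphone capture, audio playback, synthesized speech, or a Pixel Streaming peer); only the producer side changes.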

🗣️ Bring your MetaHuman and custom characters to life with zero-latency, real-time lip (+ laughter) synchronization!

Transform your digital characters with seamless, real-time lip synchronization that works completely offline and cross-platform! Watch as your characters respond naturally to speech input, creating immersive and believable conversations with minimal setup.

Quick links:

🚀 Key features:

💡 How it works:

The plugin processes audio input to generate visemes (visual representations of phonemes) that drive your MetaHuman’s facial animations in real-time, creating natural-looking speech movements that match the audio perfectly. It also detects laughter patterns in the audio to trigger dynamic laughing animations, adding another dimension of realism to your characters.
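As a rough illustration of what the configurable interpolation settings control, the viseme weights produced from the audio are typically eased toward over time rather than applied abruptly to the face. The following standalone C++ sketch shows one common smoothing approach; VisemeWeights, SmoothVisemes, and InterpSpeed are illustrative assumptions, not the plugin's API.

    // Illustrative only: how per-frame viseme weights could be smoothed before driving
    // facial animation curves. The names and the interpolation scheme are hypothetical.
    #include <cmath>
    #include <map>
    #include <string>

    using VisemeWeights = std::map<std::string, float>;  // viseme name -> weight in [0, 1]

    // Move the currently applied weights toward the latest target weights.
    // DeltaSeconds is the frame time; InterpSpeed plays the role of a configurable
    // interpolation setting (higher = snappier, lower = smoother).
    void SmoothVisemes(VisemeWeights& Current, const VisemeWeights& Target,
                       float DeltaSeconds, float InterpSpeed)
    {
        for (const auto& [Name, TargetWeight] : Target)
        {
            float& Weight = Current[Name];  // creates the entry at 0 if missing
            const float Alpha = 1.0f - std::exp(-InterpSpeed * DeltaSeconds);
            Weight += (TargetWeight - Weight) * Alpha;  // exponential ease toward the target
        }
    }

Exponential smoothing like this is frame-rate independent, which matters when audio analysis and rendering run at different rates; the plugin's configurable interpolation settings presumably expose a similar trade-off between responsiveness and smoothness.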

🎮 Perfect for:

  • Interactive NPCs and digital humans

  • Virtual assistants and guides

  • Cutscene dialogue automation

  • Live character performances

  • VR/AR experiences

  • Educational applications

  • Accessibility solutions

🌟 Works great with:

