Features:

  • subtitles aren’t needed;
  • completely offline;
  • can generate lip-sync at runtime for loaded/TTS audio;
  • can generate Anim Sequence assets with lip-sync stored in curves;
  • lip-sync via animation curves (universal) or morph targets (when possible);
  • asynchronous audio recognition and lip-sync generation;
  • (beta!) generate lip-sync at runtime on PC and Android using a remote server;
  • additional feature: recognize microphone input (speech-to-text) at runtime (Windows only).

Code Modules:

  • YnnkVoiceLipsync (Runtime)
  • YnnkVoiceLipsyncUncooked (UncookedOnly)

Number of Blueprints: 0

Number of C++ Classes: 8

Network Replicated: No

Supported Development Platforms: Windows x64

Supported Target Build Platforms: Windows x64, Android

Documentation: [Doc]

Example Project: 5.3 | 5.2 | MetaHuman 5.1 with Enhancer Plugin

Executable Demo: [ZIP]

Enhancer Plugin: [5.1 – 5.3]

This plugin uses a voice-recognition engine to generate lip-sync animation from SoundWave assets or PCM audio data. The animation is saved as curves in data assets and can be played back at runtime together with the audio. This approach makes it easy to achieve good-looking lip-sync animation without subtitles.
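To illustrate the general idea behind this pipeline (this is not the plugin's actual API): the recognizer produces timed phonemes, each phoneme is mapped to a viseme, and each viseme becomes a named animation curve with keyframes that ramp its weight up and back down over the phoneme's time span. A minimal sketch, with a hypothetical phoneme-to-viseme table:

```cpp
#include <map>
#include <string>
#include <vector>

// A recognized phoneme with its time span in the audio (seconds).
struct PhonemeEvent {
    std::string Phoneme;
    float Start;
    float End;
};

// One keyframe on a named animation curve: (time, weight).
struct CurveKey {
    float Time;
    float Weight;
};

// Hypothetical phoneme-to-viseme table (illustrative subset only).
static const std::map<std::string, std::string> PhonemeToViseme = {
    {"AA", "Viseme_AA"}, {"IY", "Viseme_IH"},
    {"M",  "Viseme_MBP"}, {"F",  "Viseme_FV"},
};

// Convert phoneme timings into per-viseme curve keys: each viseme ramps
// from 0 at the phoneme start, peaks at 1 mid-phoneme, and returns to 0.
std::map<std::string, std::vector<CurveKey>>
BuildVisemeCurves(const std::vector<PhonemeEvent>& Events) {
    std::map<std::string, std::vector<CurveKey>> Curves;
    for (const auto& E : Events) {
        auto It = PhonemeToViseme.find(E.Phoneme);
        if (It == PhonemeToViseme.end()) continue;  // unmapped phoneme: skip
        auto& Keys = Curves[It->second];
        Keys.push_back({E.Start, 0.0f});
        Keys.push_back({(E.Start + E.End) * 0.5f, 1.0f});  // peak mid-phoneme
        Keys.push_back({E.End, 0.0f});
    }
    return Curves;
}
```

In the actual plugin these curves are stored in a data asset (or baked into an Anim Sequence) and evaluated in sync with audio playback; the sketch above only shows the shape of the data involved.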

Additional feature: recognize microphone input (speech-to-text) at runtime.

Video #1

Video #2: Lip-sync Curves to Anim Sequences

Video #3: Generate Pose Asset with visemes (CC3)

Video #4: MetaHuman Setup (tutorial)

Video #5: Pose asset for MetaHuman from default visemes

New tutorial for MetaHuman.

New tutorial for CC3/CC4.

Feb. 24 update note: New language setup and packaging pipeline

Unlike text-to-lipsync solutions, this is a true voice-to-lipsync plugin. You don’t need subtitles to get the lips animated, and the resulting animation matches the speech much more closely than a subtitles-based solution does.

Lip-sync can be generated at runtime, but not in real time; i.e., it doesn’t work with a microphone or other streamed audio.

Fully supported languages: English and Chinese. Also supported: Russian, Italian, German, French, Spanish, Portuguese, and Polish.

Whisper add-on (to use Whisper instead of Vosk): YnnkWhisperRecognizer

