Generative AI Newsroom

[Question] How to achieve Lip-Synced Vid2Vid with LTX 2.3 (Native Audio) in ComfyUI?

LTX-2.3 r/StableDiffusion · Apr 9, 2026, 12:00 a.m.

Workflow request for converting a silent source video into lip-synced talking output in ComfyUI, using LTX 2.3's native-audio capabilities.

The post targets a high-value production use case: preserving the source video's visual identity during vid2vid conversion while adding synchronized speech, via LTX 2.3 nodes and ComfyUI graph design.
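For readers unfamiliar with how ComfyUI workflows are driven programmatically: ComfyUI accepts a JSON graph of nodes (each with a `class_type` and `inputs`, where links are `["node_id", output_index]` pairs) posted to its `/prompt` endpoint. The sketch below shows what a lip-synced vid2vid graph of the kind the post requests *might* look like. All LTX-specific node class names, file paths, and the `denoise` value are hypothetical placeholders, not the actual LTX 2.3 node pack API; only the overall graph/payload shape reflects ComfyUI's real format.

```python
import json

# Hypothetical ComfyUI API-format graph for lip-synced vid2vid.
# Node class names below are placeholders; the real LTX 2.3 node
# pack may name and wire these differently.
workflow = {
    "1": {"class_type": "LoadVideo",        # hypothetical: silent source clip
          "inputs": {"path": "input/silent_source.mp4"}},
    "2": {"class_type": "LoadAudio",        # hypothetical: speech track
          "inputs": {"path": "input/speech.wav"}},
    "3": {"class_type": "LTXVideoSampler",  # hypothetical LTX 2.3 sampler node
          "inputs": {"video": ["1", 0],     # link = ["node_id", output_index]
                     "audio": ["2", 0],     # native-audio conditioning input
                     "denoise": 0.45,       # assumption: low denoise to keep identity
                     "seed": 42}},
    "4": {"class_type": "SaveVideo",        # hypothetical output node
          "inputs": {"video": ["3", 0], "filename_prefix": "lipsync"}},
}

def to_prompt_payload(graph: dict) -> str:
    """Wrap a graph the way ComfyUI's /prompt endpoint expects it."""
    return json.dumps({"prompt": graph})

if __name__ == "__main__":
    # The payload would be POSTed to http://127.0.0.1:8188/prompt
    print(to_prompt_payload(workflow)[:100])
```

The design intent mirrors the post's goal: the source video constrains identity (low denoise on the vid2vid path), while the audio input drives lip motion through the model's native-audio conditioning.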
