[Question] How to achieve Lip-Synced Vid2Vid with LTX 2.3 (Native Audio) in ComfyUI?
A workflow request for converting a silent source video into lip-synced talking output in ComfyUI using LTX 2.3's native-audio capabilities.
The post targets a high-value production use case: preserving the original visual identity during vid2vid while adding synchronized speech via LTX 2.3 nodes and ComfyUI graph design.
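Whatever graph ultimately answers the request, it can be driven programmatically once built. The sketch below queues an API-format workflow against a locally running ComfyUI server via its standard `/prompt` endpoint; the server URL is ComfyUI's default, and the actual node class types for LTX 2.3 vid2vid with native audio are not specified in the post, so none are assumed here.

```python
import json
import urllib.request

# Default address of a locally running ComfyUI server (assumption:
# stock install, default port 8188).
COMFY_URL = "http://127.0.0.1:8188/prompt"


def build_payload(workflow: dict, client_id: str = "lipsync-demo") -> bytes:
    """Wrap an API-format workflow graph for ComfyUI's /prompt endpoint.

    `workflow` maps node ids to {"class_type": ..., "inputs": {...}};
    the concrete LTX node class types depend on the installed node pack.
    """
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")


def queue_workflow(workflow: dict) -> dict:
    """POST the workflow to the ComfyUI server and return its JSON reply."""
    req = urllib.request.Request(
        COMFY_URL,
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `queue_workflow(graph)` with an exported API-format graph enqueues it; progress can then be tracked over ComfyUI's websocket using the same `client_id`.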
Reference links