Prompting Guide with LTX-2.3
Migrated from LTX-2.3 notes: 14 extracted reference links.
Original discussion: https://www.reddit.com/r/StableDiffusion/comments/1rnij3k/prompting_guide_with_ltx23/
Referenced links:
- https://huggingface.co/Comfy-Org/ltx-2/resolve/main/split_files/text_encoders/gemma_3_12B_it_fp4_mixed.safetensors — https://huggingface.co/Comfy-Org/ltx-2/resolve/main/split_files/text_encoders/gemma_3_12B_it_fp4_mixed.safetensors…
- https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/text_encoders/ltx-2.3_text_projection_bf16.safetensors — https://huggingface.co/Comfy-Org/ltx-2/resolve/main/split_files/text_encoders/gemma_3_12B_it_fp4_mixed.safetensors…
- https://www.reddit.com/r/StableDiffusion/comments/1rnij3k/comment/o97kots/ — See my other comment here if you want to run GGUF quants of the text encoders to save space and/or RAM. A lot of the…
- https://civitai.com/models/2445970/ltx23-fp4 — There are FP8 models which are almost half that size…
- https://huggingface.co/unsloth/LTX-2.3-GGUF/tree/main — There are FP8 models which are almost half that size…
- https://civitai.com/models — There are FP8 models which are almost half that size…
- https://huggingface.co/RuneXX/LTX-2.3-Workflows/tree/main — There are FP8 models which are almost half that size…
- https://huggingface.co/Comfy-Org/ltx-2/blob/main/split_files/text_encoders/gemma_3_12B_it_fp4_mixed.safetensors — There are FP8 models which are almost half that size…
- https://huggingface.co/unsloth/gemma-3-12b-it-GGUF — There are FP8 models which are almost half that size…
- https://pytorch.org/get-started/locally/ — No, that’s not your build. This is PyTorch. The current version of PyTorch is 2.10.0 - you can’t have a build with PyTorch…
- https://www.python.org/ — No, that’s not your build. This is PyTorch: https://pytorch.org/get-started/locally/ The current version of PyTorch is 2.10.0 -…
- https://github.com/Comfy-Org/comfy-kitchen — No, that’s not your build. This is PyTorch: https://pytorch.org/get-started/locally/ The current version of PyTorch is 2.10.0 -…