A fast AI Video Generator for the GPU Poor. Supports Wan 2.1/2.2, Hunyuan Video, LTX Video and Flux.
Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model
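The idea behind this caching approach is that when consecutive timestep embeddings barely change, the model's output barely changes either, so the previous output can be reused instead of running another forward pass. A minimal sketch of that idea, with all names (`cached_denoise`, `fake model`, the threshold value) being illustrative assumptions rather than the repo's actual API:

```python
import numpy as np

def cached_denoise(model, latents, timestep_embs, threshold=0.05):
    """Hypothetical sketch of timestep-embedding-based caching: reuse the
    previous model output when consecutive timestep embeddings are nearly
    identical. Names and threshold are illustrative, not the repo's API."""
    cached_output = None
    prev_emb = None
    outputs = []
    for emb in timestep_embs:
        if prev_emb is not None and cached_output is not None:
            # Relative change in the timestep embedding, used as a cheap
            # proxy for how much the model output would change.
            delta = np.linalg.norm(emb - prev_emb) / (np.linalg.norm(prev_emb) + 1e-8)
            if delta < threshold:
                outputs.append(cached_output)  # skip the expensive forward pass
                prev_emb = emb
                continue
        cached_output = model(latents, emb)  # full forward pass
        outputs.append(cached_output)
        prev_emb = emb
    return outputs
```

In practice the published method uses a more refined change estimate and accumulates skipped error, but the skip-when-similar control flow above is the core of the speedup.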
Official implementation for "RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers" (ICML 2025)
Official implementation of Radial Attention
A lightweight video generation inference framework
[ICML2025] Sparse VideoGen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity
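Sparsity-based acceleration restricts each query token to a subset of keys instead of attending over the full sequence. A toy sketch of the general principle, assuming a simple fixed temporal window (this is an illustration of sparse attention in general, not Sparse VideoGen's actual spatial-temporal kernels):

```python
import numpy as np

def local_window_attention(q, k, v, window=2):
    """Hypothetical sparse-attention sketch: each query attends only to
    keys within `window` positions, shrinking the attention cost from
    O(n^2) toward O(n * window). Illustrative only."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)   # scores over the window only
        weights = np.exp(scores - scores.max())   # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]
    return out
```

With a window covering the whole sequence this reduces to dense attention; the real method instead picks the sparsity pattern adaptively across spatial and temporal dimensions.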
Context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/)
Less is Enough: Training-Free Video Diffusion Acceleration via Runtime-Adaptive Caching
Text Encoders finally matter 🤖🎥 - scale CLIP & LLM influence! + a Nerdy Transformer Shuffle node
Gradio UI for training video models using finetrainers
ComfyUI-HunyuanVideo-Avatar is now available in ComfyUI. HunyuanVideo-Avatar is a multimodal diffusion transformer (MM-DiT)-based model capable of simultaneously generating dynamic, emotion-controllable, multi-character dialogue videos.
ComfyUI-HunyuanPortrait is now available in ComfyUI. HunyuanPortrait is a diffusion-based condition-control method that employs implicit representations for highly controllable and lifelike portrait animation.
Custom node set, mostly for Hunyuan Video, but also includes some WAN Video nodes.
Benchmarking training throughput of contemporary diffusion models on AMD's GPU hardware.
AI Text-to-Video Generation Example using Hunyuan Model