
This ComfyUI workflow brings audio reactivity to AI animation in an EASY way
DOCS, WORKFLOWS, EXPLANATIONS: https://github.com/yvann-ba/ComfyUI_Yvann-Nodes
Please give a ⭐ on GitHub,
it really helps push the node pack forward and share the workflows
-
This workflow lets you sync multiple image inputs with your audio, making your animations come alive by switching between images on the beat of different audio elements (bass, drums, vocals...), with smooooth transitions (or sharp ones if you're a techno guy)
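If you're curious how the audio side works under the hood, here's a minimal, standalone Python sketch (assuming librosa and numpy outside of ComfyUI; the file name, frame count, and FPS are placeholders, and this is not the node pack's actual code): it turns an audio track into one normalized reactivity weight per animation frame, which is the kind of signal that drives the image switching.

```python
# Minimal sketch (not the node pack's actual code): turn an audio file into
# one reactivity weight per animation frame, using librosa + numpy.
import numpy as np
import librosa

AUDIO_PATH = "music.mp3"   # hypothetical input file
NUM_FRAMES = 240           # number of frames in your animation
FPS = 24                   # animation frame rate

# Load audio and compute an onset-strength envelope (peaks on drum hits / beats)
y, sr = librosa.load(AUDIO_PATH, sr=None, mono=True)
envelope = librosa.onset.onset_strength(y=y, sr=sr)

# Resample the envelope to one value per animation frame
env_times = librosa.times_like(envelope, sr=sr)
frame_times = np.arange(NUM_FRAMES) / FPS
weights = np.interp(frame_times, env_times, envelope)

# Normalize to 0..1 so it can drive blending/switching downstream
weights = (weights - weights.min()) / (weights.max() - weights.min() + 1e-8)
print(weights[:10])
```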
It uses an audio-driven implementation of IPAdapter to smoothly blend styles based on your audio-reactive images, and includes ControlNet to help shape your animation based on your input video (a conceptual sketch of the blending idea is shown below)
The workflow is based on Stable Diffusion 1.5 and Hyper-SD (8 steps); it's designed to create high-quality animations efficiently, even on low-VRAM GPU setups (around 6 GB of VRAM)
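To illustrate the smooth-vs-sharp transition idea in plain numpy (a conceptual sketch only, not the actual IPAdapter-audio node internals; the embeddings and the envelope here are placeholders): the per-frame weight either crossfades two style embeddings continuously, or gets thresholded so the style snaps on beat hits.

```python
# Conceptual sketch (not the actual IPAdapter-audio node): use per-frame
# weights to blend two style embeddings, smoothly or with hard switching.
import numpy as np

rng = np.random.default_rng(0)
emb_a = rng.normal(size=768)      # placeholder "style A" embedding
emb_b = rng.normal(size=768)      # placeholder "style B" embedding
# Stand-in for the normalized audio envelope from the previous sketch
weights = np.clip(np.sin(np.linspace(0, 8 * np.pi, 240)) * 0.5 + 0.5, 0, 1)

def blend(w: float, sharp: bool = False, threshold: float = 0.5) -> np.ndarray:
    """Return the conditioning embedding for one frame."""
    if sharp:                      # techno mode: hard switch on beat hits
        w = 1.0 if w >= threshold else 0.0
    return (1.0 - w) * emb_a + w * emb_b

frames = [blend(w, sharp=False) for w in weights]   # smooth crossfade
print(len(frames), frames[0].shape)
```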
Description:
Trained words:
Name: audioreactiveVid2vid_v10.zip
Size (KB): 340
Type: Archive
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success