
An AnimateDiff workflow I made using the AlignYourSteps scheduler; it's still a bit experimental. It should be way faster than a normal workflow and can be used with normal checkpoints.
Supports txt2vid, img2vid, vid2vid, IPAdapter, and ControlNets for pose, depth, softedge, and canny.
Not a beginner-friendly workflow!
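If you're wondering why AlignYourSteps helps with speed: instead of a generic sigma schedule, it uses a short, model-tuned list of noise levels, so you can get away with far fewer sampling steps. Below is a rough, hypothetical sketch of the resampling idea (log-linear interpolation of a base schedule); the base numbers are placeholders, not the published AYS values, and this is not the ComfyUI node's actual code.

```python
import numpy as np

def resample_schedule(base_sigmas, num_steps):
    # Log-linearly resample a descending sigma schedule to num_steps + 1 points.
    log_sigmas = np.log(np.asarray(base_sigmas, dtype=np.float64))
    xs = np.linspace(0.0, 1.0, num=len(base_sigmas))
    xt = np.linspace(0.0, 1.0, num=num_steps + 1)
    sigmas = np.exp(np.interp(xt, xs, log_sigmas))
    sigmas[-1] = 0.0  # samplers expect the schedule to end at zero noise
    return sigmas

# Placeholder 10-point schedule (high noise -> low noise), NOT the real AYS numbers.
base = [14.6, 6.5, 3.1, 1.6, 0.9, 0.5, 0.3, 0.15, 0.06, 0.03]
print(resample_schedule(base, 12))
```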
Preparations:
- Read the yellow note nodes.
- Download all required models and LoRAs.
- Reselect all models, LoRAs, VAE, etc. to make sure they are all correctly selected (use whatever VAE you want).
- Note that the NEGATIVE prompt box is minimized by default under the POSITIVE one.
- On the far right, make sure to check the whole save-node cluster and follow the instructions there.
- Get a motion LoRA if you want to use one.
- Check all the switches and what they do.
Recommended checkpoints: AstrAnime_V6 and Photon (both behave well with AnimateDiff), but use whatever you like.
https://github.com/guoyww/AnimateDiff has links to download v3_adapter_sd_v15.ckpt and v3_sd15_mm.ckpt.
https://github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file has links to download the IPAdapter models needed.
https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main has the ControlNet v1.1 SD1.5 models (pose, depth, softedge, canny).
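If you'd rather script the downloads than click through the pages above, here's a rough sketch using the huggingface_hub library. The repo IDs, filenames, and target folders are my assumptions (on the Hugging Face mirror the adapter is named v3_sd15_adapter.ckpt rather than v3_adapter_sd_v15.ckpt, and folder names depend on how your AnimateDiff/ControlNet nodes are set up), so double-check everything against the links above. The IPAdapter models are easiest to grab from the links in the cubiq repo README.

```python
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFY = Path("ComfyUI/models")  # adjust to your ComfyUI install

downloads = [
    # (repo_id, filename, target folder) -- assumed names, verify before running
    ("guoyww/animatediff", "v3_sd15_mm.ckpt", COMFY / "animatediff_models"),
    ("guoyww/animatediff", "v3_sd15_adapter.ckpt", COMFY / "loras"),
    ("lllyasviel/ControlNet-v1-1", "control_v11p_sd15_openpose.pth", COMFY / "controlnet"),
    ("lllyasviel/ControlNet-v1-1", "control_v11f1p_sd15_depth.pth", COMFY / "controlnet"),
    ("lllyasviel/ControlNet-v1-1", "control_v11p_sd15_softedge.pth", COMFY / "controlnet"),
    ("lllyasviel/ControlNet-v1-1", "control_v11p_sd15_canny.pth", COMFY / "controlnet"),
]

for repo_id, filename, target in downloads:
    target.mkdir(parents=True, exist_ok=True)
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=str(target))
    print("downloaded", path)
```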
Description:
Trained Words:
Name: astraaliBlueblackAYS_v12.zip
Size (KB): 11
Type: Archive
Pickle Scan Result: Success
Pickle Scan Message: No Pickle imports
Virus Scan Result: Success