
This ControlNet is specifically finetuned for AnimateDiff; its purpose is to preserve the initial image's appearance during animation. Its performance is less stable than the tile ControlNet's, because its training dataset is smaller than the tile ControlNet's. When using it for animation, I recommend the less-motion version for more stable results.
For img2video: https://github.com/crystallee-ai/controlGIF can serve as a reference for how to use these checkpoints. For vid2vid: see https://discord.com/channels/1076117621407223829/1149372684220768367/threads/1192162917395730635, which describes a creative and effective workflow.
crishhh on Hugging Face
A mirror of the AnimateDiff ControlNet models by crishhh on Hugging Face; the original repository can be found here. As of now only SD1.5 is supported, but crishhh is working on an SDXL model. I can't explain exactly how it works or which settings give the best results, as there is sadly zero documentation. Use the force of trial and error, as usual :)
Description:
initial version
Trained words:
Name: animatediffControlnet_sd15MoreMotionFP16.ckpt
Size (KB): 2496126
Type: Model
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success
Name: animatediffControlnet_sd15MoreMotionFP16.safetensors
Size (KB): 2495844
Type: Model
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success
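As a side note on the scan results above: unlike .ckpt files, a .safetensors file can be inspected without executing any code, which is why it cannot contain pickle imports. The format starts with an 8-byte little-endian header length followed by a JSON header describing each tensor. A minimal sketch (standard library only; the function name is my own, not part of any library) for listing a checkpoint's tensors safely:

```python
import json
import struct

def read_safetensors_header(path):
    """Read the JSON header of a .safetensors file without loading tensor data.

    Layout: 8 bytes little-endian unsigned header length, then that many
    bytes of UTF-8 JSON, then the raw tensor data.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional string-to-string map; every other key
    # maps a tensor name to its dtype, shape, and byte offsets.
    meta = header.pop("__metadata__", None)
    return header, meta
```

For example, `read_safetensors_header("animatediffControlnet_sd15MoreMotionFP16.safetensors")` would return the tensor table and any embedded metadata, letting you verify the file's contents before loading it into a pipeline.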