
This workflow generates an image with SD1.5, then uses Grounding DINO to mask the portions of the image that should be animated with AnimateLCM. It animates 16 frames and uses the looping context options to produce a video that loops. The denoise on the video-generation KSampler is set to 0.8 so that some of the structure of the originally generated image is retained.
Based on: https://comfyworkflows.com/workflows/14d8d52b-1297-4d58-8469-2bdab4f5ebf1
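For readers curious what the masking step does under the hood, here is a minimal sketch using the standalone groundingdino and segment_anything Python packages (the workflow itself does this through ComfyUI custom nodes). The config/checkpoint paths, prompt, and thresholds below are illustrative assumptions, not values taken from the workflow.

```python
# Sketch: text-prompted masking with Grounding DINO + SAM, standing in for
# the workflow's masking stage. Paths and thresholds are assumed, not taken
# from the workflow itself.
import torch
from torchvision.ops import box_convert
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

dino = load_model("GroundingDINO_SwinT_OGC.py", "groundingdino_swint_ogc.pth")
image_source, image = load_image("generated_image.png")  # numpy RGB + tensor

# Detect the region named by the GroundingDino prompt, e.g. "River"
boxes, logits, phrases = predict(
    model=dino, image=image, caption="river",
    box_threshold=0.35, text_threshold=0.25,
)

# Convert normalized cxcywh boxes to pixel xyxy coordinates for SAM
h, w, _ = image_source.shape
boxes_xyxy = box_convert(boxes * torch.tensor([w, h, w, h]),
                         in_fmt="cxcywh", out_fmt="xyxy").numpy()

# Segment the detected region; this mask is what limits the animation
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_source)
mask, _, _ = predictor.predict(box=boxes_xyxy[0], multimask_output=False)
```

The resulting mask is what confines AnimateLCM's redrawing to the chosen object, while the 0.8 denoise keeps the rest of the composition close to the source image.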
Models needed:
1) Your favourite SD1.5 checkpoint
2) AnimateLCM motion model and LoRA (https://huggingface.co/wangfuyun/AnimateLCM): put the LoRA in models/loras and the motion model in models/animatediff_models (see the download sketch after this list)
3) SAM Model (Manager > Install Models, search for SAM): sam_vit_b_01ec64.pth
4) Grounding DINO model (should be downloaded automatically)
5) Your favourite upscale model (Manager > Install Models, search for upscale): 4x_NMKD-Siax_200k.pth
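If you prefer to script the AnimateLCM download from item 2, the sketch below uses huggingface_hub. The filenames are my best guess at the files in the wangfuyun/AnimateLCM repo and the paths assume a default ComfyUI install; verify both before running.

```python
# Sketch: fetch the AnimateLCM motion model and LoRA into the ComfyUI folders
# named above. Filenames and local paths are assumptions; check the repo at
# https://huggingface.co/wangfuyun/AnimateLCM for the exact file names.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="wangfuyun/AnimateLCM",
    filename="AnimateLCM_sd15_t2v.ckpt",               # motion model (assumed name)
    local_dir="ComfyUI/models/animatediff_models",
)
hf_hub_download(
    repo_id="wangfuyun/AnimateLCM",
    filename="AnimateLCM_sd15_t2v_lora.safetensors",   # LoRA (assumed name)
    local_dir="ComfyUI/models/loras",
)
```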
Instructions:
1) The cyan nodes are the models needed for the workflow
2) The blue nodes are the required user inputs. Three prompts are needed for this workflow: a) the image generation prompt, describing what you want in the overall image, e.g. "Mountains, River, Clouds"; b) the GroundingDino prompt, describing the object within the image that you want animated, e.g. "River"; and c) the video prompt, describing what you want the animated object to do, e.g. "Flowing River". (A sketch of setting these three prompts through the ComfyUI API follows this list.)
3) If you run out of VRAM, disable the "Upscale Image (Using Model)" node. This reduces the input resolution to the KSampler to 256 x 256 over 16 frames. If that works, you can also bypass the "Upscale Image By" node to bring the resolution back up to 512 x 512 over 16 frames. With no nodes bypassed, the original workflow produces a 1024 x 1024, 16-frame video. Thanks to mojoflojo, who helped with testing on a lower-VRAM GPU.
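Once the workflow runs in the ComfyUI editor, it can also be queued headlessly over ComfyUI's HTTP API after exporting it with "Save (API Format)". A minimal sketch follows; the export filename, node IDs, and input key names are placeholders you would need to look up in your own export.

```python
# Sketch: queue the exported workflow through ComfyUI's /prompt endpoint,
# overriding the three user prompts. The node IDs ("3", "12", "27") and the
# export filename are placeholders; read the real IDs from your API-format export.
import json
import urllib.request

with open("partiallyAnimateAn_api.json") as f:       # assumed export name
    workflow = json.load(f)

workflow["3"]["inputs"]["text"] = "Mountains, River, Clouds"   # image generation prompt
workflow["12"]["inputs"]["prompt"] = "River"                   # GroundingDino prompt
workflow["27"]["inputs"]["text"] = "Flowing River"             # video prompt

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",                   # default local ComfyUI address
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))      # returns the prompt_id
```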
Description:
Trained words:
Name: partiallyAnimateAn_v10.zip
Size (KB): 5
Type: Archive
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success