
This workflow is designed for advanced video processing. It combines style transfer, motion analysis, depth estimation, and frame interpolation to produce a final video that blends elements from different sources with enhanced visual quality.
The Group Bypass toggle lets you turn off what you don't need.
You will need to correct the ControlNet models; I made an error while rearranging lol. Wanted to make sure I put this someplace I wouldn't forget it, since it's in a mostly usable state.
1.5 has some corrections to models and labels.
2.0 is the final version I'm releasing. Tweaked it to create the output I need, the way I need it. This workflow was built to change the style of 3D-rendered vids and other videos, and made as efficient and robust as I could for SD 1.5 and AnimateDiff 1.5. Customize it how you want; if you add a group box, it gets added to the bypass node, which makes it easy to switch things on and off. Enjoy!
2.5 Added audio input and some other improvements. Don't take the IP Strength or IP Image Strength sliders above 1.0; I moved them to the input group since I use them so often.
3.0 Significantly improved vid2vid. Kept audio but ditched audio-to-mask, as I could not get it to function the way I wanted. It technically works, but... not the way I need. This is the final version, no more versions lol.
4.0 Okay, the IP Adapter update broke my stuff lol, but that ended up being an awesome thing. With the updates to IP Adapter Plus, and a greater understanding of how it works from reading the update notes lol... I managed to reduce it to two ControlNets and two IP Adapters. Tweak the weights and other settings (I recommend reading the IP Adapter update notes; they made a lot of changes), add a video, enter your prompts, set your LoRAs, upload an image for a face swap, etc. Turn off what you don't need with ease. The most simplified and easy-to-use version I can muster lol. This is the final, final final, final lol.
ControlNets set up in the workflow: Depth, OpenPose, Canny, SoftEdge.
-
VHS_LoadVideo: Loads the video file into the workflow, extracting frames and audio for processing.
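As a rough illustration (not the author's exact graph), a VHS_LoadVideo node looks something like this in ComfyUI's API-format JSON; the file name, node ID, and input names below are assumptions and can differ between VideoHelperSuite versions:

    # Hypothetical API-format entry for a VHS_LoadVideo node. All values are
    # illustrative placeholders, not settings from this workflow.
    load_video = {
        "1": {
            "class_type": "VHS_LoadVideo",
            "inputs": {
                "video": "source_clip.mp4",   # a video placed in ComfyUI's input folder
                "frame_load_cap": 0,          # 0 = load every frame
                "skip_first_frames": 0,
                "select_every_nth": 1,
            },
        }
    }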
-
ImageScale: Scales images to the desired resolution, preparing them for processing in subsequent nodes.
-
ControlNetLoader: Loads the ControlNet models that condition generation on the structure (depth, pose, edges) extracted from the video frames.
-
DepthAnythingPreprocessor & DWPreprocessor: These nodes preprocess the images for depth estimation and pose detection, respectively, enhancing the quality of motion and depth data for the animation process.
-
Control Net Stacker: Chains multiple ControlNets, each with its preprocessed image and weight, into a single stack that is applied together downstream (a rough sketch follows below).
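Here is a hedged sketch of two stackers chained so that depth and pose conditioning end up in one stack. Node IDs, strengths, and the optional cnet_stack input name are assumptions based on the Efficiency Nodes pack, not the author's exact values:

    # Two hypothetical stacker nodes: the second takes the first's stack as input,
    # so both ControlNets are applied downstream (e.g. by CR Apply Multi-ControlNet
    # or the Efficient Loader's ControlNet stack input).
    controlnet_stack = {
        "10": {
            "class_type": "Control Net Stacker",
            "inputs": {
                "control_net": ["4", 0],  # depth ControlNet from a ControlNetLoader
                "image": ["6", 0],        # DepthAnythingPreprocessor output
                "strength": 0.6,          # illustrative weight
            },
        },
        "11": {
            "class_type": "Control Net Stacker",
            "inputs": {
                "control_net": ["5", 0],  # openpose ControlNet from a ControlNetLoader
                "image": ["7", 0],        # DWPreprocessor output
                "strength": 0.5,          # illustrative weight
                "cnet_stack": ["10", 0],  # chain onto the depth stacker
            },
        },
    }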
-
LoadImage: Loads static images into the workflow, which can be used for texture mapping, background replacement, or as reference images in style transfer processes.
-
PrepImageForClipVision: Prepares images for processing with CLIP, adjusting them to the right format and resolution.
-
CLIPVisionLoader: Loads the CLIP Vision model used for semantic image understanding; its embeddings guide the IP Adapter style transfer.
-
IPAdapterModelLoader: Loads the IP Adapter model, used for style transfer, allowing the adaptation of images to match a certain aesthetic or thematic style.
-
IPAdapterApplyEncoded: Applies the encoded IP Adapter embeddings to the diffusion model, transferring the visual style of the reference image onto the processed video frames.
-
IPAdapterEncoder: Encodes images using the IP Adapter model, preparing them for the style transfer process.
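Tying these three IP Adapter nodes back to the 2.5 note above, the main practical setting is the weight. A minimal, heavily hedged sketch (input names vary between IPAdapter Plus versions, so treat these as placeholders):

    # Placeholder apply node showing only the weight, which the workflow notes
    # say to keep at or below 1.0. Other required inputs (model, ipadapter,
    # embeds, etc.) are omitted because their names depend on the IPAdapter Plus
    # version you have installed.
    ipadapter_apply = {
        "20": {
            "class_type": "IPAdapterApplyEncoded",
            "inputs": {
                "weight": 0.8,   # keep <= 1.0 per the workflow notes
            },
        }
    }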
-
ADE_LoadAnimateDiffModel & ADE_ApplyAnimateDiffModel: These nodes load and apply the AnimateDiff model, which is used for creating smooth transitions and animations between video frames.
-
ADE_AnimateDiffLoRALoader: Loads LoRA models that can be used with AnimateDiff to enhance motion detail and fluidity in the animations.
-
ADE_UseEvolvedSampling: Utilizes evolved sampling techniques to improve the quality of generated animations and transitions.
-
BatchPromptSchedule: Manages the scheduling of text prompts for guiding the generation and transformation process, enhancing the contextual relevance of the generated content.
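For reference, BatchPromptSchedule (FizzNodes) takes its keyframes as a small text schedule that maps frame numbers to prompts and interpolates between them. The frame indices and prompts below are placeholders, not from this workflow:

    # Example keyframed prompt text for BatchPromptSchedule. Frame indices and
    # prompt contents are illustrative only.
    prompt_schedule = '''
    "0"  : "a knight walking through a sunlit forest, cinematic lighting",
    "48" : "a knight walking through a snow-covered forest, cold blue tones",
    "96" : "a knight walking through ruins at night, torchlight, embers"
    '''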
-
ReActorFaceSwap & FaceRestoreCFWithModel: These nodes are involved in face processing, where ReActorFaceSwap swaps faces between subjects, and FaceRestoreCFWithModel restores or enhances facial details in the video.
-
FILM VFI: Applies frame interpolation to create smoother motion in the video.
-
VHS_VideoCombine: Combines the processed frames back into a video format, adding audio and finalizing the video output.
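A hedged sketch of the combine step, including how the frame rate interacts with FILM VFI above: if interpolation multiplies the frame count (2x is assumed here), the combine node's frame rate should scale by the same factor to keep playback speed unchanged. Input names, link indices, and values are assumptions and may differ by VideoHelperSuite version:

    # Hypothetical VHS_VideoCombine entry. All values are placeholders.
    source_fps = 24
    vfi_multiplier = 2                 # assumed FILM VFI setting (2x frames)
    video_combine = {
        "30": {
            "class_type": "VHS_VideoCombine",
            "inputs": {
                "images": ["25", 0],    # interpolated frames from FILM VFI
                "audio": ["1", 2],      # audio from VHS_LoadVideo (output index is an assumption)
                "frame_rate": source_fps * vfi_multiplier,   # 48 fps after 2x interpolation
                "filename_prefix": "geeky_ghost_vid2vid",
                "format": "video/h264-mp4",
            },
        }
    }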
-
KSampler Adv. (Efficient): The advanced sampler node (from the Efficiency Nodes pack) that runs the actual diffusion sampling on each batch of frames.
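To make the sampling step concrete, here is a rough placeholder for the kind of settings involved. The exact input names of KSampler Adv. (Efficient) depend on the Efficiency Nodes version, so everything below is an assumption rather than the workflow's real configuration:

    # Placeholder sampler settings for a vid2vid pass. Names and values are
    # illustrative assumptions only.
    sampler_settings = {
        "steps": 25,
        "cfg": 7.0,
        "sampler_name": "euler",
        "scheduler": "normal",
        "denoise": 0.6,   # lower values keep more of the source video's structure
    }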
-
Efficient Loader: Loads the checkpoint, VAE, and LoRAs, and takes the prompts and ControlNet stack, in one compact node, keeping the graph tidy and fast to edit.
-
CR Apply Multi-ControlNet: Applies the stacked ControlNets to the conditioning so that depth, pose, and edge guidance all steer the sampling together.
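Outside the node list itself: if you want to drive this workflow from a script rather than the ComfyUI web UI, you can export an API-format copy (e.g. via "Save (API Format)" with dev mode options enabled) and POST it to a local ComfyUI server. This follows ComfyUI's stock API example; the file name and server address are assumptions:

    # Minimal sketch of queueing an API-format export of this workflow against a
    # locally running ComfyUI instance. The JSON file name is hypothetical.
    import json
    import urllib.request

    with open("geeky_ghost_vid2vid_api.json") as f:
        workflow = json.load(f)

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",             # default local ComfyUI address
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    print(urllib.request.urlopen(req).read().decode("utf-8"))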
Description:
Significantly improved vid2vid with some trickery lol.
Trained Words:
Name: geekyGhostVid2vid_v30.zip
Size (KB): 8
Type: Archive
Pickle Scan Result: Success
Pickle Scan Message: No Pickle imports
Virus Scan Result: Success