
STOP! THESE MODELS ARE NOT FOR PROMPTING/IMAGE GENERATION
These models were extracted from the base ControlNet models in a slightly different way from the others, and the different extraction method means they produce slightly different results.
These are the models required for the ControlNet extension, converted to Safetensors and "pruned" down to just the ControlNet neural network. I have tested them, and they work.
These models contain only the neural network data required to make ControlNet function; they will not produce good images unless they are used with ControlNet.
These models were extracted using the extract_controlnet_diff.py script, and produce a slightly different result from the models extracted using the extract_controlnet.py script.
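Conceptually, a "difference" extraction stores the ControlNet weights as deltas relative to the base Stable Diffusion weights, rather than as absolute values; the deltas are then added back onto whatever base model is loaded at runtime. The sketch below illustrates the idea only — plain floats stand in for tensors, and the key names and helper functions are hypothetical, not the actual extract_controlnet_diff.py code:

```python
def extract_diff(control_state: dict, base_state: dict) -> dict:
    """Illustrative sketch: for layers shared with the base SD model,
    store only the delta; ControlNet-only layers are kept whole."""
    diff = {}
    for key, weight in control_state.items():
        if key in base_state:
            diff[key] = weight - base_state[key]  # shared layer -> store delta
        else:
            diff[key] = weight                    # ControlNet-only layer
    return diff


def apply_diff(base_state: dict, diff_state: dict) -> dict:
    """Rebuild usable ControlNet weights by adding the stored deltas back
    onto whatever base model is currently loaded (missing keys start at 0)."""
    return {key: base_state.get(key, 0.0) + delta
            for key, delta in diff_state.items()}


# Toy round trip with scalar "weights" (layer names are made up):
base = {"input_blocks.0.weight": 0.5}
control = {"input_blocks.0.weight": 0.7, "zero_convs.0.weight": 0.1}
diff = extract_diff(control, base)
restored = apply_diff(base, diff)
assert all(abs(restored[k] - control[k]) < 1e-9 for k in control)
```

Because only deltas are stored for shared layers, a difference model can be recombined with a fine-tuned base checkpoint, not just the original one — which is one reason the results differ from a plain extraction.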
The original versions of these models in .pth format can be found here. BUT YOU DO NOT NEED THOSE .pth FILES! The files I have uploaded here are direct replacements for them!
- control_sd15_canny
- control_sd15_depth
- control_sd15_hed
- control_sd15_scribble
- control_sd15_normal
- control_sd15_openpose
- control_sd15_seg
- control_sd15_mlsd
Download these models and place them in the \stable-diffusion-webui\extensions\sd-webui-controlnet\models directory.
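On Linux or macOS, the same placement can be scripted; the paths below are examples only (your web UI root and download folder will differ per install):

```python
# Sketch (assumed paths): move downloaded difference models into the
# ControlNet extension's models directory.
from pathlib import Path
import shutil

webui = Path.home() / "stable-diffusion-webui"          # adjust to your install
models_dir = webui / "extensions" / "sd-webui-controlnet" / "models"
models_dir.mkdir(parents=True, exist_ok=True)

# Move any downloaded ControlNet .safetensors files into place.
for f in (Path.home() / "Downloads").glob("control_sd15_*.safetensors"):
    shutil.move(str(f), models_dir / f.name)
```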
Note: these models were extracted from the original .pth files using the extract_controlnet_diff.py script contained within the extension's GitHub repo. Kohya-ss has uploaded them to HF here.
Description:
Trigger words:
Name: controlnetPreTrained_depthDifferenceV10.safetensors
Size (KB): 705665
Type: Pruned Model
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success