NF4+LoRA, FP8 to NF4 + LoRA For ComfyUI (Workflow) - For 8GB VRAM and less, v1.0 (ID: 956564)

For custom_nodes - https://github.com/bananasss00/ComfyUI_bitsandbytes_NF4-Lora

Direct link https://github.com/bananasss00/ComfyUI_bitsandbytes_NF4-Lora/archive/refs/heads/master.zip

Key Features of the Nodes

  1. On-the-fly Conversion: The nodes convert FP8 models to NF4 format in real time, and generation speed does not drop when LoRAs are applied.

  2. Model Loading Optimizations: Loading times are improved by letting users specify the model's data type (load_dtype), avoiding unnecessary re-conversions.

  3. Post-Generation Model Unloading Fixes: Ensures the model is properly partially unloaded after generation, fixing earlier issues that could affect memory management.
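For intuition about what the NF4 format does, normal-float quantization stores each weight as the nearest of 16 levels derived from quantiles of a standard normal distribution (weights are roughly normally distributed, so this spends precision where values are dense). A minimal stdlib sketch with hypothetical helper names; the real bitsandbytes NF4 codebook is asymmetric and reserves an exact zero, which this simplification does not:

```python
from statistics import NormalDist

def make_nf_levels(bits=4):
    """Build 2**bits quantization levels from equally spaced quantiles
    of the standard normal, scaled to [-1, 1]. (Simplified illustration;
    the actual bitsandbytes NF4 codebook differs in detail.)"""
    n = 2 ** bits
    nd = NormalDist()
    # take quantiles strictly inside (0, 1) to avoid +/- infinity
    qs = [(i + 0.5) / n for i in range(n)]
    levels = [nd.inv_cdf(q) for q in qs]
    scale = max(abs(l) for l in levels)
    return [l / scale for l in levels]

def quantize(values, levels):
    """Map each value to the nearest codebook level (absmax-scaled)."""
    absmax = max(abs(v) for v in values) or 1.0
    return [min(levels, key=lambda l: abs(v / absmax - l)) * absmax
            for v in values]
```

Each weight then needs only 4 bits (an index into the 16-entry codebook) plus one shared absmax scale per block, which is what makes 8GB-VRAM inference feasible.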

Please support this workflow if you like it: Buzz / Like / Add a post or review / Follow - thanks!

Comparison of NF4 / FP8 / GGUF_Q4_0 generations with two LoRAs (HyperFlux + realism_lora) at 8 steps:

https://imgsli.com/MzA0Nzgz https://imgsli.com/MzA0Nzg0 https://imgsli.com/MzA0Nzg2 https://imgsli.com/MzA0Nzg3

Tips for Best Quality

  1. Use FP8 models: Although NF4 models are supported as input, the quality of the applied LoRA is significantly higher when starting from FP8 models.

  2. Adjust the LoRA weight for NF4 models: When using NF4 models as inputs, you may need to increase the LoRA weight, otherwise the LoRA effect may not be noticeable. Also, in the Advanced Nodes section, try setting the rounding_format parameter to the 2,1,7 preset. Keep in mind, however, that these settings may cause artifacts; experimenting with custom values may yield better results. I don't yet know a reliable way to apply LoRA effectively to NF4 models.

The node uses a modified rounding function from ComfyUI, which can be found at https://github.com/comfyanonymous/ComfyUI/blob/203942c8b29dfbf59a7976dcee29e8ab44a1b32d/comfy/float.py#L14.

When the rounding_format preset is 2,1,7 (or a custom value), the three numbers set EXPONENT_BITS, MANTISSA_BITS, and EXPONENT_BIAS inside that function, which control the precision of the floating-point rounding.
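As a rough sketch of what those three parameters do, the hypothetical round-to-nearest function below quantizes a value to a float format with the given exponent/mantissa widths and bias. This is an assumption-laden simplification, not the node's code: the actual ComfyUI function also supports stochastic rounding and handles overflow and subnormals more carefully.

```python
import math

def round_to_float(x, exponent_bits=2, mantissa_bits=1, exponent_bias=7):
    """Round x to the nearest value representable with the given
    EXPONENT_BITS / MANTISSA_BITS / EXPONENT_BIAS (sketch only;
    overflow clamping and stochastic rounding are omitted)."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    mag = abs(x)
    # representable exponent range for normal numbers
    min_exp = 1 - exponent_bias
    max_exp = (2 ** exponent_bits - 1) - exponent_bias
    exp = max(min(math.floor(math.log2(mag)), max_exp), min_exp)
    # spacing between neighbouring representable values at this exponent
    step = 2.0 ** (exp - mantissa_bits)
    return sign * round(mag / step) * step
```

More mantissa bits shrink the step size (finer values near each power of two); more exponent bits and a well-chosen bias widen the range of magnitudes that can be represented at all, which is the trade-off the preset values tune.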

Feel free to check out the project and contribute at the GitHub repository.

https://github.com/bananasss00/ComfyUI_bitsandbytes_NF4-Lora

Author: https://civitai.com/user/egordubrovskiy9112843

Name: nf4LoraFP8ToNF4LoraForComfyui_v10.zip

Size (KB): 31

Type: Archive

Pickle scan result: Success

Pickle scan message: No Pickle imports

Virus scan result: Success
