Flux Heavy 17B v1.0 q4_0 (ID: 1079329)

I am not the original author of this model. This is a repost from the HuggingFace project page here:

https://huggingface.co/city96/Flux.1-Heavy-17B

About

This is a 17B self-merge of the original 12B parameter Flux.1-dev model.

Merging was done similarly to 70B->120B LLM merges, with the layers repeated and interwoven in groups; a rough sketch of the idea follows the stats below.

Final model stats:
p layers: [ 32]
s layers: [ 44]
n params: [17.17B]
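The exact schedule isn't published here, but the technique is the same family as mergekit-style passthrough merges: overlapping groups of consecutive blocks are stacked back to back, so depth grows while every weight still comes from the original model. A minimal sketch, with invented group sizes (not the ones actually used for Flux.1-Heavy):

```python
# Sketch of an interleaved self-merge schedule. Illustrative only: the
# group/overlap sizes below are invented, not the real Flux.1-Heavy ones.

def interleave(num_layers: int, group: int, overlap: int) -> list[int]:
    """Repeat overlapping groups of layer indices, passthrough-merge style."""
    schedule: list[int] = []
    start = 0
    while start < num_layers:
        schedule.extend(range(start, min(start + group, num_layers)))
        start += group - overlap
    return schedule

# Flux.1-dev has 19 double-stream and 38 single-stream blocks; a schedule
# like this maps each slot of the deeper model back to an original block,
# whose weights are then copied in.
print(interleave(19, 8, 4))  # [0..7, 4..11, 8..15, 12..18, 16..18]
```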

ComfyUI

Place it in the "diffusion_models" folder and load it normally via the "Load Diffusion Model" node.
For the original 35GB file you need roughly 80GB of system RAM on Windows to keep it from swapping to disk. Inference takes about 35-40GB of VRAM, assuming you offload the text encoder and unload it during VAE decoding. Partial offloading works if you have enough system RAM.
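Those figures are consistent with the parameter count; a quick back-of-the-envelope check (assuming 16-bit weights):

```python
# Back-of-the-envelope memory check for the full-precision checkpoint.
params = 17.17e9     # parameter count from the stats above
bytes_per_param = 2  # bf16/fp16 storage

print(f"{params * bytes_per_param / 1e9:.1f} GB")  # ~34.3 GB -> the ~35GB file
# Activations, the text encoder, and working buffers account for the
# rest of the quoted 35-40GB of VRAM during inference.
```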

Settings? LoRA compatibility?

Just use the same settings you'd use for regular Flux. LoRAs do seem to have at least some effect, but the blocks don't line up, so don't expect them to work amazingly.
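One way to see why they only partly work (a hedged sketch, not an official tool): if you knew the merge schedule, a LoRA delta trained for original block i would belong on every merged slot copied from block i, not just on the slot that happens to share its index. The schedule and key layout below are illustrative:

```python
# Illustrative only: re-targeting LoRA keys through a known merge schedule.
# schedule[j] = i means merged block j was copied from original block i.
schedule = [0, 1, 2, 3, 2, 3, 4, 5]  # toy mapping, not the real one

def retarget(key: str) -> list[str]:
    # key like "double_blocks.2.img_attn.qkv" (layout assumed for the sketch)
    prefix, idx, rest = key.split(".", 2)
    return [f"{prefix}.{j}.{rest}" for j, i in enumerate(schedule) if i == int(idx)]

print(retarget("double_blocks.2.img_attn.qkv"))
# -> the delta lands on merged blocks 2 and 4, since both copy original block 2
```

Applying a LoRA without such a remap leaves the repeated copies of each layer un-patched, which would explain an effect that is diluted rather than absent.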

Important Notes From Uploader (me)

  • Rendering a 512x512px image at 30 steps takes about 80 seconds on my 8GB VRAM GPU.

  • I took the 35GB file and quantized it down to Q4_0, producing the 9.6GB file (see the size sketch after this list).

  • I will also be publishing Q2 and Q8 versions for comparison.
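For reference on where the 9.6GB figure comes from: GGUF's Q4_0 format stores each block of 32 weights as 4-bit values plus one fp16 scale, 18 bytes per block, i.e. 4.5 bits per weight. A minimal sketch of the scheme and the resulting size estimate:

```python
import numpy as np

def q4_0_roundtrip(block: np.ndarray) -> np.ndarray:
    """Quantize/dequantize one 32-weight block, ggml Q4_0 style:
    d = (signed value of largest magnitude) / -8, nibbles q in [0, 15],
    dequantized x ~= (q - 8) * d."""
    assert block.shape == (32,)
    m = block[np.argmax(np.abs(block))]
    d = m / -8 if m != 0 else 1.0
    q = np.clip(np.round(block / d) + 8, 0, 15)
    return (q - 8) * np.float16(d)

# 2-byte fp16 scale + 16 bytes of nibbles per 32 weights = 4.5 bits/weight.
# Ignoring the few tensors GGUF keeps at higher precision:
print(f"{17.17e9 * 4.5 / 8 / 1e9:.2f} GB")  # ~9.66 GB, matching the file size
```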

Description:

This file must go into the diffusion_models folder; load it with the "Load Diffusion Model" node in ComfyUI.

Trained words:

Name: fluxHeavy17B_v10Q40.gguf

Size (KB): 9661410

Type: Model

Pickle scan result: Success

Pickle scan message: No Pickle imports

Virus scan result: Success

