Flux Blockwise version CLIP_L_Large_BF16 (ID: 1112356)

Flux Blockwise (Mixed Precision Model)

I had to build several custom tools to produce this mixed-precision model; to my knowledge it is the first one built this way.
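If you want to check which precisions the file actually contains, a minimal sketch along these lines (my illustration, not part of the release tooling) uses the safetensors library to summarize the dtypes stored in the checkpoint; the filename matches the attachment listed below.

```python
# Minimal sketch: list which dtypes a safetensors checkpoint actually contains,
# so you can confirm which parts are FP8 and which are BF16.
# Assumes the safetensors package is installed and the file is in the working directory.
from collections import Counter
from safetensors import safe_open

path = "fluxBlockwise_clipLLargeBF16.safetensors"

counts = Counter()
with safe_open(path, framework="pt", device="cpu") as f:
    for key in f.keys():
        counts[str(f.get_tensor(key).dtype)] += 1

for dtype, n in counts.most_common():
    print(f"{dtype}: {n} tensors")
```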

  • Faster and more accurate than any other FP8-quantized model currently available

  • Works in ComfyUI and Forge, but Forge needs to be set to a BF16 UNET

  • ComfyUI: load it as a diffusion model and USE THE DEFAULT WEIGHT dtype

  • FP16 upcasting should not be used unless absolutely necessary, such as when running on CPU or IPEX

  • FORGE - set COMMANDLINE_ARGS= --unet-in-bf16 --vae-in-fp32 (see the launch-file sketch after this list)

  • Other than the need to force Forge into BF16 (and optionally the VAE into FP32), it should work the same as the DEV model, with the added benefit of being 5 GB smaller than the full BF16 model
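For Forge, those flags go in the launch script's COMMANDLINE_ARGS. A minimal webui-user.bat sketch for Windows (adapt the equivalent line in webui-user.sh on Linux):

```bat
:: webui-user.bat — minimal sketch; the flags below are the ones from the list above
set COMMANDLINE_ARGS=--unet-in-bf16 --vae-in-fp32

call webui.bat
```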

It turns out that, to my knowledge, every quantized model built up to this point (including my own) has been quantized sub-optimally relative to Black Forest Labs' recommendations.

Only the UNET blocks of the diffusion model should be quantized, and at run time they should be upcast to BF16, not FP16 (ComfyUI does this correctly). A minimal sketch of the idea follows.
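As an illustration only (this is not the custom tooling used to build the release), a block-wise pass along these lines quantizes just the UNet blocks to FP8 and leaves everything else in BF16. The "double_blocks."/"single_blocks." prefixes are assumed from the reference Flux implementation, and the file paths are placeholders.

```python
# Illustrative sketch only — not the actual tooling used to build this release.
# Quantize only the diffusion (UNet) blocks to FP8; keep everything else in BF16.
import torch
from safetensors.torch import load_file, save_file

SRC = "flux1-dev.safetensors"                 # placeholder input path
DST = "flux1-dev-blockwise-fp8.safetensors"   # placeholder output path

state = load_file(SRC)
out = {}
for name, tensor in state.items():
    if name.startswith(("double_blocks.", "single_blocks.")):
        # UNet transformer blocks: store as FP8 (e4m3) to save space
        out[name] = tensor.to(torch.float8_e4m3fn)
    else:
        # Everything else (embeddings, modulation, final layers): keep BF16, not FP16
        out[name] = tensor.to(torch.bfloat16)

save_file(out, DST)
```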


[Example images: Hippo remix, Lion remix]

I am currently trying to work out how to follow the Black Forest Labs recommendations while using GGUF.

Description:

Trained words:

Name: fluxBlockwise_clipLLargeBF16.safetensors

Size (KB): 835261

Type: Model

Pickle scan result: Success

Pickle scan message: No Pickle imports

Virus scan result: Success

Flux Blockwise
