
Full checkpoint with an improved TE; do not load an additional CLIP/TE.
FLUX.1 (Base UNET) + Google FLAN
NF4 is my recommended model for quality/speed balance.
This model quantizes the 42 GB FP32 Google FLAN T5-XXL and pairs it with an improved CLIP-L for Flux. To my knowledge, no one else has posted or attempted this.
- Quantized from the FP32 T5-XXL (42 GB, 11B parameters)
- Base UNET: no baked-in LoRAs or other changes
- A full FP16 version is available
- The NF4 full checkpoint is ready to use in ComfyUI with an NF4 loader, or natively in Forge (Forge has LoRA support, and ComfyUI takes 10x longer per iteration than Forge; I prefer ComfyUI, but its NF4 support is garbage)
- The FP8 version is recommended for ComfyUI; just use the standard checkpoint loader (NF4 is recommended for Forge, as it loses less in quantization)
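For context on why NF4 loses relatively little: NF4 ("NormalFloat4") stores each weight as one of 16 fixed levels chosen for normally distributed weights, scaled per block by that block's absolute maximum. This is a minimal pure-Python sketch of that round-trip, not how any particular loader implements it; the 16 levels are the published QLoRA/bitsandbytes codebook, the block size of 64 matches the bitsandbytes default, and everything else is illustrative:

```python
# The 16 NF4 levels from the QLoRA paper / bitsandbytes codebook.
NF4_LEVELS = [
    -1.0, -0.6961928009986877, -0.5250730514526367, -0.39491748809814453,
    -0.28444138169288635, -0.18477343022823334, -0.09105003625154495, 0.0,
    0.07958029955625534, 0.16093020141124725, 0.24611230194568634,
    0.33791524171829224, 0.44070982933044434, 0.5626170039176941,
    0.7229568362236023, 1.0,
]

def nf4_quantize(weights, block_size=64):
    """Quantize a flat list of floats to 4-bit indices plus per-block scales."""
    indices, scales = [], []
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        absmax = max(abs(w) for w in block) or 1.0  # per-block scale factor
        scales.append(absmax)
        for w in block:
            x = w / absmax  # normalize into [-1, 1]
            # Store the index of the nearest NF4 level (4 bits per weight).
            indices.append(min(range(16), key=lambda i: abs(NF4_LEVELS[i] - x)))
    return indices, scales

def nf4_dequantize(indices, scales, block_size=64):
    """Reconstruct approximate weights from indices and per-block scales."""
    return [NF4_LEVELS[idx] * scales[i // block_size]
            for i, idx in enumerate(indices)]

# Round-trip example: each weight comes back within one quantization step.
weights = [0.31, -0.07, 0.9, 0.0, -0.45, 0.12]
idx, scales = nf4_quantize(weights)
restored = nf4_dequantize(idx, scales)
```

Each weight costs 4 bits plus a shared scale per 64-weight block, which is roughly how a 42 GB FP32 encoder shrinks to about an eighth of its size while the per-weight error stays small.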
Again, do not load a separate VAE, CLIP, or TE; versions quantized from FP32 are baked in.
Per the Apache 2.0 license, FLAN is attributed to Google.
Description:
Trigger words:
Name: fluxDevSchnellBaseUNET_fluxDevFLANNF4.safetensors
Size (KB): 16579357
Type: Model
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success