
I haven't seen this uploaded on here.
This is the scaled fp8 FLUX.1 [dev] model that comfyanonymous uploaded to HuggingFace. It should give better results than the regular fp8 model, much closer to fp16, while running much faster than the GGUF Q quants. It works with the TorchCompileModel node.
The fp8 scaled checkpoint is slightly experimental: it is specifically tuned to get the highest quality possible while using fp8 matrix multiplication on hardware that supports it (RTX 40 series / Ada / H100, etc.). It will very likely be lower quality than Q8_0, but it will run inference faster if your hardware supports fp8 ops.
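The core idea behind a "scaled" low-precision checkpoint is storing a scale factor alongside the quantized weights, so the narrow fp8 range can cover the full spread of the original values. This is a minimal conceptual sketch in NumPy; the `max_repr` value and function names are illustrative assumptions, not the actual quantization code used for this checkpoint, and real fp8 storage would add rounding error that this float32 simulation omits.

```python
import numpy as np

def quantize_scaled(weights, max_repr=448.0):
    # fp8 e4m3 can represent magnitudes up to roughly 448; a per-tensor
    # scale maps the weight range into that representable window.
    scale = np.abs(weights).max() / max_repr
    q = weights / scale  # values now fit within the fp8 range
    # (a real implementation would cast q to float8 here; we keep
    # float32 so the sketch stays portable)
    return q, scale

def dequantize_scaled(q, scale):
    # At inference time the scale is multiplied back in.
    return q * scale

w = np.array([0.5, -1.25, 3.0, -0.01], dtype=np.float32)
q, s = quantize_scaled(w)
restored = dequantize_scaled(q, s)
print(np.allclose(w, restored))
```

In the real checkpoint the cast to fp8 makes the round trip lossy, which is why the tuning of these scales matters for output quality.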
From HuggingFace:
Test scaled fp8 flux dev model; use with the newest version of ComfyUI with weight_dtype set to default. Put it in your ComfyUI/models/diffusion_models/ folder and load it with the "Load Diffusion Model" node.
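The install step above amounts to dropping the file into one directory. A minimal shell sketch, assuming ComfyUI lives at `$HOME/ComfyUI` (adjust `COMFY` to your install; the `touch` stands in for the actual HuggingFace download):

```shell
# Assumed install path; change COMFY if ComfyUI lives elsewhere.
COMFY="${COMFY:-$HOME/ComfyUI}"
mkdir -p "$COMFY/models/diffusion_models"

# Placeholder standing in for the checkpoint downloaded from HuggingFace.
touch flux1DevScaledFp8_v2.safetensors

mv flux1DevScaledFp8_v2.safetensors "$COMFY/models/diffusion_models/"
ls "$COMFY/models/diffusion_models/"
```

After this, the "Load Diffusion Model" node should list the file in its dropdown.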
Description:
Original v1 model was broken. Test v2 has been updated and fixed.
Trained words:
Name: flux1DevScaledFp8_v2.safetensors
Size (KB): 11622664
类型: Model
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success