SD1.5 Direct Preference Optimization - DPO v1.0 (ID: 271743)

Not my model; it comes from the Hugging Face repo. This is an excellent merge model, particularly in the middle blocks. Try it yourself: take your favorite model, block merge this at about 10% input and 20% middle, and adjust from there.
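A minimal sketch of that merge recipe, assuming both files are single-file SD1.5 checkpoints with the usual LDM key layout ("model.diffusion_model.input_blocks/middle_block/output_blocks"); the base file name here is a placeholder for your own model:

```python
from safetensors.torch import load_file, save_file

base = load_file("your_favorite_model.safetensors")      # the model you want to improve
dpo = load_file("sd15DirectPreference_v10.safetensors")  # this checkpoint

def dpo_ratio(key: str) -> float:
    # ~10% DPO in the input blocks, ~20% in the middle block,
    # everything else left alone -- adjust from there.
    if "model.diffusion_model.input_blocks" in key:
        return 0.10
    if "model.diffusion_model.middle_block" in key:
        return 0.20
    return 0.0

merged = {}
for key, tensor in base.items():
    r = dpo_ratio(key)
    if r > 0.0 and key in dpo and dpo[key].shape == tensor.shape:
        # Blend in float32, then cast back to the base checkpoint's dtype.
        blended = (1.0 - r) * tensor.float() + r * dpo[key].float()
        merged[key] = blended.to(tensor.dtype)
    else:
        merged[key] = tensor

save_file(merged, "merged_block_weighted.safetensors")
```

Dedicated merge tools give finer per-block control, but this is the whole idea: a weighted average of matching U-Net tensors, heavier in the middle block.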

Original U-Net: https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1

bdsqlsz's release: https://huggingface.co/bdsqlsz/dpo-sd-text2image-v1-fp16

bdsqlsz released the SDXL model here: https://civitai.com/models/237681/dpo-sdxl-fp16 but we poor 1.5 users were left in the dark ages.

I had to do some hacking to get the fp32 version, so you will have to bring your own VAE.
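Since the checkpoint ships without a baked-in VAE, here is a minimal diffusers sketch for pairing it with the ClearVAE file listed below (assumes a diffusers version with single-file loading support; the prompt is arbitrary):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Load the standalone VAE, then hand it to the pipeline explicitly.
vae = AutoencoderKL.from_single_file(
    "ClearVAE_V2.3_fp16.safetensors", torch_dtype=torch.float16
)
pipe = StableDiffusionPipeline.from_single_file(
    "sd15DirectPreference_v10.safetensors", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("out.png")
```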

Diffusion Model Alignment Using Direct Preference Optimization

Direct Preference Optimization (DPO) for text-to-image diffusion models is a method for aligning diffusion models with human preferences by optimizing directly on human comparison data. For details, see the paper Diffusion Model Alignment Using Direct Preference Optimization.
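Roughly, the training objective from the paper pushes the fine-tuned denoiser ε_θ to reconstruct the preferred image x^w better, relative to a frozen reference model ε_ref, than the rejected image x^l (my paraphrase of the loss; see the paper for the exact notation):

```latex
\mathcal{L}(\theta) = -\,\mathbb{E}\,\log\sigma\Big(
  -\beta T\,\omega(\lambda_t)\Big(
    \big(\|\epsilon^w-\epsilon_\theta(x_t^w,t)\|^2-\|\epsilon^w-\epsilon_{\text{ref}}(x_t^w,t)\|^2\big)
  - \big(\|\epsilon^l-\epsilon_\theta(x_t^l,t)\|^2-\|\epsilon^l-\epsilon_{\text{ref}}(x_t^l,t)\|^2\big)
  \Big)\Big)
```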

The SD1.5 model is fine-tuned from stable-diffusion-v1-5 on the offline human-preference dataset pickapic_v2.

The SDXL model is fine-tuned from stable-diffusion-xl-base-1.0 on the same offline human-preference dataset, pickapic_v2.
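If you prefer the Hub weights over this single-file release, usage on the original model card amounts to swapping the DPO U-Net into a stock SD1.5 pipeline, along these lines (repo ids as linked above):

```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

# Pull just the DPO-tuned U-Net and drop it into a standard SD1.5 pipeline.
unet = UNet2DConditionModel.from_pretrained(
    "mhdang/dpo-sd1.5-text2image-v1", subfolder="unet", torch_dtype=torch.float16
)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16
).to("cuda")

image = pipe("Two cats playing chess on a tree branch", guidance_scale=7.5).images[0]
image.save("cats_playing_chess.png")
```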

Description:

Trained words:

Name: ClearVAE_V2.3_fp16.safetensors

Size (KB): 163411

Type: VAE

Pickle scan result: Success

Pickle scan message: No Pickle imports

Virus scan result: Success

Name: sd15DirectPreference_v10.safetensors

Size (KB): 4004054

Type: Model

Pickle scan result: Success

Pickle scan message: No Pickle imports

Virus scan result: Success
