Roukin8-character-LoHa/LoCon/FullCkpt ろうきん8版本LoCon (ID: 20497)

This post contains three types of fine-tuned results trained on the same dataset: LoHa, LoCon, and a full checkpoint (all with clip skip 1).

Associated Hugging Face repository: https://huggingface.co/alea31415/roukin8-characters-partial (more versions can be found there).

Note that the models are fine-tuned on top of BP, so you get the style shown below when using BP as the base. Using these models on top of orange could kill the orange style unless the weight is properly adjusted.

What is LoHa/LoCon?

LoHa: LoRA with Hadamard product decomposition (ref: https://arxiv.org/abs/2108.06098)

LoCon: LoRA extended to the residual (convolutional) blocks

See https://github.com/KohakuBlueleaf/LyCORIS/blob/lycoris/Algo.md for more explanation
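To make the difference concrete, here is a minimal numpy sketch (not the actual LyCORIS code; the layer shape and dimensions are only illustrative) of how the weight update is parameterized in each case:

import numpy as np

out_dim, in_dim = 320, 320

# LoRA / LoCon: delta_W = B @ A, so the update has rank at most dim
dim = 16
A = np.random.randn(dim, in_dim)
B = np.random.randn(out_dim, dim)
delta_w_lora = B @ A                              # rank <= 16

# LoHa: delta_W = (B1 @ A1) * (B2 @ A2), element-wise (Hadamard) product
dim = 8
A1, A2 = np.random.randn(dim, in_dim), np.random.randn(dim, in_dim)
B1, B2 = np.random.randn(out_dim, dim), np.random.randn(out_dim, dim)
delta_w_loha = (B1 @ A1) * (B2 @ A2)              # rank can reach 8 * 8 = 64

print(np.linalg.matrix_rank(delta_w_lora), np.linalg.matrix_rank(delta_w_loha))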

How to use?

For webui install https://github.com/KohakuBlueleaf/a1111-sd-webui-locon and use it as a lora
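For example, once the extension is installed, placing roukin-locon-step-20000.safetensors in the usual LoRA models folder and calling it with the standard <lora:roukin-locon-step-20000:0.8> prompt syntax should work (the 0.8 weight is only an illustration; adjust it to taste).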

Why LoHa?

The following comparison should be quite convincing.

(Left 4: LoHa at different numbers of training steps; right 4: the same for LoCon; same seed and same training parameters for both runs)

In addition to the five characters, I also trained a number of styles into the model (as I have always been doing, actually).

However, LoRA and LoCon do not combine styles with characters that well (the characters are trained only on anime images), and this ability is largely improved in LoHa.

Note that the two files have almost the same size (around 30 MB). For this I set (linear_dim, conv_dim) to (16, 8) for LoCon and (8, 4) for LoHa. However, thanks to the Hadamard product, the resulting update matrix can now have rank up to 8x8 = 64 for linear layers and 4x4 = 16 for convolutional layers.
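As a rough back-of-the-envelope check of why the sizes match (a sketch that ignores alpha values and the exact list of trained layers): halving the dimension while storing two factor pairs keeps the per-layer parameter count the same while raising the maximum achievable rank.

def lora_params(dim, in_dim, out_dim):
    # one pair (A, B): A is (dim, in), B is (out, dim)
    return dim * (in_dim + out_dim)

def loha_params(dim, in_dim, out_dim):
    # two pairs (A1, B1) and (A2, B2)
    return 2 * dim * (in_dim + out_dim)

# e.g. a hypothetical 320x320 linear layer:
print(lora_params(16, 320, 320))  # 10240 parameters, update rank <= 16
print(loha_params(8, 320, 320))   # 10240 parameters, update rank <= 8*8 = 64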

P.S. Only the 20000-step files are provided here, so if you want other versions please check the Hugging Face repository.

How about full checkpoint?

It still performs best, as illustrated below. Maybe we can reach similar results with LoHa by using a larger dimension, a better decomposition of the convolutional layers, and by adding back the few layers that are not yet trained in the current LoCon/LoHa implementation. Let's see.

Training details

  • LoCon/LoHa: alpha 1, dimensions as specified above, batch size 8, learning rate 2e-4 throughout with constant scheduler, AdamW

  • Full checkpoint: batch size 8, learning rate 2.5e-6 with cosine scheduler, Adam8bit, conditional dropout 0.08

Example Images

Images 1 and 4 come from LoCon, 2 and 3 come from LoHa, and 5 and 6 come from full checkpoint.

Dataset

I have uploaded the anime part of the dataset (3000 anime screenshots). The current format works with EveryDream via the multiply.txt files; for the kohya trainer you should first flatten the folder structure with this script: https://github.com/cyber-meow/anime_screenshot_pipeline/blob/develop/utilities/flatten_folder.py

python flatten_folder.py --src_dir /path/to/dir

Description:

Trigger words: yamanomitsuha, sabine, colette, beatrice, adelaide, edstyle

File name: roukin-locon-step-20000.safetensors

Size (KB): 30456

Type: Model

Pickle scan result: Success

Pickle scan message: No Pickle imports

Virus scan result: Success
