![Large Rank LoRA - Experiments towards full finetuning, version Fantasy Anime v1.0 [Pony] (ID: 764569)](https://image.1111down.com/xG1nkqKTMzGDvpLrqFT7WA/124fb1ff-cbe4-48dc-9fa8-aff382162cb1/width=450/25992027.jpeg)
Large Rank LoRA - Experiments towards full finetuning
What is this?
This is a collection of large-rank (dimension), large-dataset LoRAs, mostly for testing full-finetuning settings on combined datasets that have previously been used for individual LoRAs (see each version's notes for the lists).
The goal is to land on a single dataset and training settings that can:
- Express all the concepts well in a single checkpoint/LoRA
- Improve the base model's output quality, aesthetics, and anatomy
I will then use that dataset and those training settings for new full-finetune models.
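For context on what "large rank" means mechanically: a LoRA learns a low-rank update to each frozen base weight, scaled by alpha/rank. A minimal PyTorch sketch (shapes are illustrative; this model trains with network_dim = network_alpha = 128, per the Kohya config further down):

```python
import torch

# Illustrative shapes for a single attention projection; not taken from the model.
d_out, d_in = 1280, 1280
rank, alpha = 128, 128              # network_dim / network_alpha in the config below

W = torch.randn(d_out, d_in)        # frozen base weight (not trained)
A = torch.randn(rank, d_in) * 0.01  # trainable "down" matrix
B = torch.zeros(d_out, rank)        # trainable "up" matrix, zero-initialized

# Effective weight at inference: base plus the scaled low-rank update.
# With alpha == rank the scale is 1.0; a larger rank means more trainable
# capacity, which is why a rank-128 LoRA behaves closer to a full finetune.
W_eff = W + (alpha / rank) * (B @ A)
```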
Who is this for?
If you enjoy any of the LoRAs whose datasets are included, you might enjoy the combined results as well; in testing, the individual concepts/characters are still fairly well represented so far. (See this post for comparisons.)
These might also be useful to other creators for model mixing/finetuning; I've trained on base Pony for compatibility with the largest number of merge mixes.
Datasets
Datasets included have:
- Manual curation of quality and aesthetic tags (masterpiece, best quality, low quality, very aesthetic, aesthetic) - a scripted sketch of this follows the list
- Around 1,000 images each
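Kohya caption files are plain .txt sidecars next to each image (caption_extension in the config below), so this kind of tag curation can be partly scripted. A hypothetical sketch, with the curated tag set purely illustrative:

```python
from pathlib import Path

# The quality/aesthetic tags under curation, per the list above.
QUALITY_TAGS = {"masterpiece", "best quality", "low quality", "very aesthetic", "aesthetic"}

def set_quality_tags(caption_file: Path, curated: list[str]) -> None:
    """Strip any existing quality tags from a caption, then append the curated ones."""
    tags = [t.strip() for t in caption_file.read_text(encoding="utf-8").split(",")]
    tags = [t for t in tags if t and t not in QUALITY_TAGS]
    caption_file.write_text(", ".join(tags + curated), encoding="utf-8")

for txt in Path("C:/data/superlora_fantasy").rglob("*.txt"):  # train_data_dir from the config
    set_quality_tags(txt, ["masterpiece", "best quality", "very aesthetic"])
```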
e.g. the Fantasy Anime v1.0 [Pony] version includes 3,374 images from these LoRA datasets:
Recommended prompt structure:
Positive prompt:
{{tags}}
score_9, score_8_up, score_7_up, score_6_up, absurdres, masterpiece, best quality, very aesthetic
Negative prompt:
(worst quality, low quality:1.1), score_4, score_3, score_2, score_1, error, bad anatomy, bad hands, watermark, ugly, distorted, signature
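For a concrete end-to-end example, here is a minimal diffusers sketch of that prompt structure (assuming local copies of the base checkpoint and this LoRA; filenames follow the config below). Note that the (tag:1.1) weighting syntax is A1111/Forge-specific and is not parsed by vanilla diffusers, so it is dropped here:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "ponyDiffusionV6XL_v6StartWithThisOne.safetensors",  # base Pony checkpoint
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("fantasy_superlora_v1.safetensors")

# {{tags}} filled in with the training sample prompt from the config below.
positive = (
    "1girl, frieren, white hair, twintails, green eyes, earrings, white capelet, "
    "score_9, score_8_up, score_7_up, score_6_up, absurdres, "
    "masterpiece, best quality, very aesthetic"
)
negative = (
    "worst quality, low quality, score_4, score_3, score_2, score_1, "
    "error, bad anatomy, bad hands, watermark, ugly, distorted, signature"
)

image = pipe(prompt=positive, negative_prompt=negative).images[0]
image.save("sample.png")
```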
Description:
Datasets (from existing LoRAs):
Prompts and previews from those pages can be referenced for more of what this LoRA has learned.
Total of 3,374 images across:
Training settings (Kohya):
I used a batch size of 10, which was right at the limit of what I could push while keeping the s/it under 3.
After epoch 20, the s/it gradually increased, which resulted in a total training time of just over 36 hours:
epoch 21/30
steps: 70%|██████████████████████████████████▎ | 7287/10410 [7:09:19<3:03:59, 3.53s/it, avr_loss=0.0897]
epoch 22/30
steps: 73%|███████████████████████████████████▏ | 7634/10410 [12:11:19<4:25:56, 5.75s/it, avr_loss=0.0898]
epoch 23/30
steps: 77%|████████████████████████████████████▊ | 7981/10410 [16:27:01<5:00:23, 7.42s/it, avr_loss=0.0887]
epoch 24/30
steps: 80%|██████████████████████████████████████▍ | 8328/10410 [19:53:56<4:58:29, 8.60s/it, avr_loss=0.0878]
epoch 25/30
steps: 83%|████████████████████████████████████████ | 8675/10410 [22:45:11<4:33:02, 9.44s/it, avr_loss=0.0891]
epoch 26/30
steps: 87%|█████████████████████████████████████████▌ | 9022/10410 [25:32:23<3:55:45, 10.19s/it, avr_loss=0.0892]
epoch 27/30
steps: 90%|███████████████████████████████████████████▏ | 9369/10410 [28:18:39<3:08:44, 10.88s/it, avr_loss=0.0858]
epoch 28/30
steps: 93%|████████████████████████████████████████████▊ | 9716/10410 [31:04:01<2:13:08, 11.51s/it, avr_loss=0.0892]
epoch 29/30
steps: 97%|█████████████████████████████████████████████▍ | 10063/10410 [33:51:14<1:10:02, 12.11s/it, avr_loss=0.0874]
epoch 30/30
steps: 100%|█████████████████████████████████████████████████| 10410/10410 [36:41:58<00:00, 12.69s/it, avr_loss=0.0894]
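As a sanity check on those numbers: 10,410 total steps over 30 epochs works out to 347 steps per epoch, slightly more than 3,374 images / batch 10 ≈ 338, because aspect-ratio bucketing leaves a partial final batch in each resolution bucket.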
Kohya Config:
I don't recommend these settings for a 24 GB VRAM system (at minimum, reduce the batch size to 8):
{
"LoRA_type": "Standard",
"LyCORIS_preset": "full",
"adaptive_noise_scale": 0,
"additional_parameters": " --optimizer_args \"decouple=True\" \"weight_decay=0.5\" \"betas=0.9,0.99\" \"use_bias_correction=False\" --lr_scheduler_type \"CosineAnnealingLR\" --lr_scheduler_args \"T_max=30\"",
"async_upload": false,
"block_alphas": "",
"block_dims": "",
"block_lr_zero_threshold": "",
"bucket_no_upscale": true,
"bucket_reso_steps": 256,
"bypass_mode": false,
"cache_latents": true,
"cache_latents_to_disk": true,
"caption_dropout_every_n_epochs": 0,
"caption_dropout_rate": 0,
"caption_extension": ".txt",
"clip_skip": 2,
"color_aug": false,
"constrain": 0,
"conv_alpha": 1,
"conv_block_alphas": "",
"conv_block_dims": "",
"conv_dim": 1,
"dataset_config": "",
"debiased_estimation_loss": false,
"decompose_both": false,
"dim_from_weights": false,
"dora_wd": false,
"down_lr_weight": "",
"dynamo_backend": "no",
"dynamo_mode": "default",
"dynamo_use_dynamic": false,
"dynamo_use_fullgraph": false,
"enable_bucket": true,
"epoch": 30,
"extra_accelerate_launch_args": "",
"factor": -1,
"flip_aug": false,
"fp8_base": false,
"full_bf16": true,
"full_fp16": false,
"gpu_ids": "",
"gradient_accumulation_steps": 1,
"gradient_checkpointing": true,
"huber_c": 0.1,
"huber_schedule": "snr",
"huggingface_path_in_repo": "",
"huggingface_repo_id": "",
"huggingface_repo_type": "",
"huggingface_repo_visibility": "",
"huggingface_token": "",
"ip_noise_gamma": 0,
"ip_noise_gamma_random_strength": false,
"keep_tokens": 0,
"learning_rate": 1,
"log_tracker_config": "",
"log_tracker_name": "",
"log_with": "",
"logging_dir": "E:/work/LoRa_work/logging",
"loss_type": "l2",
"lr_scheduler": "cosine",
"lr_scheduler_args": "",
"lr_scheduler_num_cycles": 1,
"lr_scheduler_power": 1,
"lr_warmup": 0,
"main_process_port": 0,
"masked_loss": false,
"max_bucket_reso": 2048,
"max_data_loader_n_workers": 0,
"max_grad_norm": 1,
"max_resolution": "1024,1024",
"max_timestep": 1000,
"max_token_length": 225,
"max_train_epochs": 30,
"max_train_steps": 0,
"mem_eff_attn": false,
"metadata_author": "motimalu",
"metadata_description": "",
"metadata_license": "",
"metadata_tags": "",
"metadata_title": "",
"mid_lr_weight": "",
"min_bucket_reso": 512,
"min_snr_gamma": 5,
"min_timestep": 0,
"mixed_precision": "bf16",
"model_list": "custom",
"module_dropout": 0,
"multi_gpu": false,
"multires_noise_discount": 0.3,
"multires_noise_iterations": 6,
"network_alpha": 128,
"network_dim": 128,
"network_dropout": 0,
"network_weights": "",
"noise_offset": 0,
"noise_offset_random_strength": false,
"noise_offset_type": "Multires",
"num_cpu_threads_per_process": 2,
"num_machines": 1,
"num_processes": 1,
"optimizer": "Prodigy",
"optimizer_args": "",
"output_dir": "D:/model_output",
"output_name": "fantasy_superlora_v1",
"persistent_data_loader_workers": false,
"pretrained_model_name_or_path": "C:/workspace/webui_forge_cu121_torch21/webui/models/Stable-diffusion/ponyDiffusionV6XL_v6StartWithThisOne.safetensors",
"prior_loss_weight": 1,
"random_crop": false,
"rank_dropout": 0,
"rank_dropout_scale": false,
"reg_data_dir": "",
"rescaled": false,
"resume": "",
"resume_from_huggingface": "",
"sample_every_n_epochs": 10,
"sample_every_n_steps": 0,
"sample_prompts": "score_9, score_8_up, score_7_up, score_6_up, 1girl, frieren, white hair, twintails, green eyes, earrings, white capelet, masterpiece, best quality, very aesthetic",
"sample_sampler": "euler_a",
"save_every_n_epochs": 30,
"save_every_n_steps": 0,
"save_last_n_steps": 0,
"save_last_n_steps_state": 0,
"save_model_as": "safetensors",
"save_precision": "bf16",
"save_state": false,
"save_state_on_train_end": false,
"save_state_to_huggingface": false,
"scale_v_pred_loss_like_noise_pred": false,
"scale_weight_norms": 0,
"sdxl": true,
"sdxl_cache_text_encoder_outputs": false,
"sdxl_no_half_vae": true,
"seed": 339491249,
"shuffle_caption": false,
"stop_text_encoder_training_pct": 0,
"text_encoder_lr": 1,
"train_batch_size": 10,
"train_data_dir": "C:/data/superlora_fantasy",
"train_norm": false,
"train_on_input": true,
"training_comment": "",
"unet_lr": 1,
"unit": 1,
"up_lr_weight": "",
"use_cp": false,
"use_scalar": false,
"use_tucker": false,
"v2": false,
"v_parameterization": false,
"v_pred_like_loss": 0,
"vae": "",
"vae_batch_size": 0,
"wandb_api_key": "",
"wandb_run_name": "",
"weighted_captions": false,
"xformers": "xformers"
}
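The additional_parameters field carries the settings that matter most here: learning_rate, unet_lr, and text_encoder_lr are all 1 because Prodigy adapts the effective step size itself, and T_max=30 matches the 30 epochs so the cosine schedule anneals over the whole run. A minimal PyTorch sketch of that optimizer/scheduler pairing (assuming the prodigyopt package; the model is a stand-in):

```python
import torch
from prodigyopt import Prodigy

model = torch.nn.Linear(16, 16)  # placeholder for the LoRA network

# Mirrors --optimizer_args from the config above.
optimizer = Prodigy(
    model.parameters(),
    lr=1.0,                      # Prodigy adapts the LR; 1.0 is the standard setting
    weight_decay=0.5,
    betas=(0.9, 0.99),
    decouple=True,               # decoupled (AdamW-style) weight decay
    use_bias_correction=False,
)

# Mirrors --lr_scheduler_type "CosineAnnealingLR" --lr_scheduler_args "T_max=30".
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)

for epoch in range(30):
    # ... one epoch of training steps would run here ...
    scheduler.step()             # stepped once per epoch, matching T_max in epochs
```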
Trained words:
Name: fantasy_superlora_v1.safetensors
Size (KB): 891505
Type: Model
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success