
Model Training Summary:
- Dataset: Continuing to gather for the next round.
- Captioning: Generated using Kohya_SS with GIT captioning.
- Training Platform: Civitai's on-site trainer, targeting 1,050 steps for this round.
Training Configuration:
- Engine: Kohya
- Learning Rates:
  - UNet LR: 0.0005
  - Text Encoder LR: 0 (no adjustment)
- Clip Skip: 1
LoRA Configuration:
- Type: LoRA
- Network Dimensions: 2
- Alpha: 16
Training Details:
- Repeats: 30
- Resolution: 1024
- Batch Size: 4
- Max Epochs: 5
- LR Scheduler: Cosine with Restarts (3 cycles)
- Min SNR Gamma: 5 (I find this important)
- Noise Offset: 0.1
- Bucket Enabled: True
- Optimizer: AdamW8Bit
- Augmentation: No flip augmentation; captions were not shuffled.
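The settings above map roughly onto kohya-ss sd-scripts command-line flags. This is a sketch, not the exact command Civitai's trainer runs: the model and dataset paths are placeholders, and Flux training in sd-scripts uses a Flux-specific entry script with extra component paths rather than plain `train_network.py`.

```shell
# Hypothetical kohya-ss sd-scripts invocation mirroring the listed settings.
# Paths and the entry script are placeholders, not the author's actual setup.
accelerate launch train_network.py \
  --pretrained_model_name_or_path /path/to/base_model.safetensors \
  --train_data_dir /path/to/dataset \
  --output_name b3lla_flux \
  --network_module networks.lora \
  --network_dim 2 --network_alpha 16 \
  --unet_lr 0.0005 --text_encoder_lr 0 \
  --clip_skip 1 \
  --resolution 1024,1024 --enable_bucket \
  --train_batch_size 4 --max_train_epochs 5 \
  --max_train_steps 1050 \
  --lr_scheduler cosine_with_restarts --lr_scheduler_num_cycles 3 \
  --min_snr_gamma 5 --noise_offset 0.1 \
  --optimizer_type AdamW8bit
```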
Notes:
- Resolution 1024 allows for higher-fidelity image training.
- 30 repeats potentially boosts the model's understanding of the dataset.
- A shorter target step count of 1,050 is being used for this version.
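For reference, Kohya derives total optimizer steps from dataset size, repeats, batch size, and epochs. The dataset size below is a hypothetical value chosen only to illustrate how a 1,050-step target can arise from the settings listed above:

```shell
# Step-count arithmetic: total steps = (images * repeats / batch) * epochs.
images=28      # hypothetical dataset size; not stated in the model card
repeats=30
batch=4
epochs=5
steps_per_epoch=$(( images * repeats / batch ))
total=$(( steps_per_epoch * epochs ))
echo "$total"  # 1050
```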
Next Steps:
- Version 2 in progress: gathering dataset, editing, and updating.
Description:
First version in Flux, seems to be decent.
Recommend Flux Dev fp8.
Trigger words: b3lla_flux
Name: b3lla_flux.safetensors
Size (KB): 18808
Type: Model
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success