
Trained using Danbooru images as the dataset. Something I did sort of as a test, but I like how it turned out. It's particularly good at rendering Korra from The Legend of Korra. The more descriptive the prompt, the better the results. I used ChatGPT to generate some of the prompts in my examples because I'm lazy.
I noticed that the full negative prompts are not showing in my samples, so here is the full negative prompt. I think I used it for everything.
Negative prompt: muscles, deformed, mutation, bad anatomy, body out of frame, ((distorted and misshapen)), (cross-eyed and unappealing), (closed eyes and dull), blurry and indistinct, (bad anatomy and unflattering), ugly and unattractive, disfigured and grotesque, ((poorly drawn face and lack of detail)), (mutation and abnormalities), (mutated and deformed), (extra limbs and disproportionate), (bad body and awkward proportions), (unflattering clothing and unappealing colors), (unattractive hairstyle and messy), (bad lighting and unflattering), (lack of facial expressions and dull)
Description:
Examples use "Anything-V3.0-pruned.vae" and the "R-ESRGAN 4X+ Anime B" upscaler option. I generated 6 images per batch.
This is a 50/50 weighted sum merge between V1 and a 27k-step model I trained on the same image set as V1 but with different captions. The 27k model was producing nice results, but I feel this merge is better. Perhaps other factors are in play, but I think the images look cleaner than V1's.
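For anyone curious what a 50/50 weighted sum merge actually does: each weight tensor in the merged model is the element-wise average of the corresponding tensors in the two source models. Below is a minimal sketch of that idea. Real merges (e.g. the AUTOMATIC1111 checkpoint merger) operate on torch tensors loaded from .safetensors files; here plain Python lists stand in for tensors, and the function name is hypothetical.

```python
def weighted_sum_merge(state_a, state_b, alpha=0.5):
    """Merge two state dicts key-by-key: alpha * A + (1 - alpha) * B.

    state_a / state_b map parameter names to flat lists of floats
    (stand-ins for real weight tensors). Keys present only in A are
    carried over unchanged.
    """
    merged = {}
    for key, tensor_a in state_a.items():
        if key in state_b:
            merged[key] = [alpha * a + (1 - alpha) * b
                           for a, b in zip(tensor_a, state_b[key])]
        else:
            merged[key] = list(tensor_a)  # keep keys unique to A as-is
    return merged

# With alpha=0.5 every parameter is the simple average of the two models.
merged = weighted_sum_merge({"w": [0.0, 2.0]}, {"w": [2.0, 0.0]})
# merged["w"] → [1.0, 1.0]
```

An alpha other than 0.5 would bias the merge toward one model; 0.5 gives the even blend described above.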
Trigger Words:
Name: owlerartStyle_v2.safetensors
Size (KB): 4001747
Type: Model
Pickle Scan Result: Success
Pickle Scan Message: No Pickle imports
Virus Scan Result: Success