
17-01-2025 START
OK, so the newest positive Astigmatism, +0.6, is here. It's really, really good, but as with all things, I recommend blending it with 0.5 to attenuate overfitting and truly get the best results possible. I'll look at a LoRA merge later and see if I can make an easy package with an "optimal" Astigmatism at this stage.
Hope you all enjoy. I'm working on a really large negative for 0.6, but I need more Buzz, so it will take a little while to train. Rest assured, it is on the way, and I think it will be quite a big jump.
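If you want to try the blend in code rather than in a UI, here is a minimal diffusers sketch. The file names, adapter names, and the 0.5/0.5 split are illustrative assumptions on my part, not a fixed recommendation; adjust the weights to taste.

```python
# Sketch: blend Astigmatism +0.6 with +0.5 at inference time (diffusers + peft).
# File names and the 0.5/0.5 weights are placeholders; point them at your downloads.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load each LoRA as its own named adapter.
pipe.load_lora_weights("Astigmatism_+0.6.safetensors", adapter_name="astig_06")
pipe.load_lora_weights("Astigmatism_+0.5.safetensors", adapter_name="astig_05")

# Blend the two; leaning on 0.5 is what attenuates the 0.6 overfitting.
pipe.set_adapters(["astig_06", "astig_05"], adapter_weights=[0.5, 0.5])

image = pipe("your prompt here", num_inference_steps=30).images[0]
image.save("blend_test.png")
```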
17-01-2025 END
---
Post-0.5b, I recommend just playing with +0.5b and/or -0.5b.
When using the negative, be sure to crank CFG up right from the start, as this is the main advantage it affords you.
In small amounts it can also increase creativity, but broadly, +0.5b is the powerhouse, despite being trained on a much smaller dataset.
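As a rough sketch of what "crank CFG up" means in practice with diffusers: the file names, the -0.33 scale on the negative (following the same sign convention as the old mix further down), and the guidance_scale value are all assumptions of mine, so treat them as starting points.

```python
# Sketch: use the -0.5b "negative" alongside +0.5b and raise CFG.
# File names, weights, and guidance_scale are illustrative assumptions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("Astigmatism_+0.5b.safetensors", adapter_name="astig_pos")
pipe.load_lora_weights("Astigmatism_-0.5b.safetensors", adapter_name="astig_neg")

# Positive at a healthy strength, negative applied with a negative scale.
pipe.set_adapters(["astig_pos", "astig_neg"], adapter_weights=[0.6, -0.33])

# The higher-than-default CFG is the point: the negative is what lets you
# push guidance harder without the image falling apart.
image = pipe("your prompt here", guidance_scale=10.0, num_inference_steps=30).images[0]
image.save("neg_cfg_test.png")
```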
Below is stuff I wrote previously for anything pre-0.5b:
---------------------
I recommend the following mix for anyone starting out (I will release some sort of mixed LoRA sometime in the next week that will require less VRAM than loading 4 LoRAs, lol):
Astigmatism +0.5
Astigmatism -0.5
Astigmatism +0.4b
Astigmatism -0.2
The +'s at 0.33 each
The -'s at -0.33 each
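Here is a sketch of how that mix maps onto adapter weights in diffusers. The file names are placeholders for however you saved each download; the ±0.33 values are the ones listed above.

```python
# Sketch: the 4-LoRA starter mix, +'s at 0.33 and -'s at -0.33.
# File names are placeholders; only the weights come from the list above.
import torch
from diffusers import StableDiffusionXLPipeline

mix = {
    "astig_p05":  ("Astigmatism_+0.5.safetensors",  0.33),
    "astig_m05":  ("Astigmatism_-0.5.safetensors",  -0.33),
    "astig_p04b": ("Astigmatism_+0.4b.safetensors", 0.33),
    "astig_m02":  ("Astigmatism_-0.2.safetensors",  -0.33),
}

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load each LoRA as a named adapter, then apply the whole mix at once.
for name, (path, _weight) in mix.items():
    pipe.load_lora_weights(path, adapter_name=name)

pipe.set_adapters(list(mix), adapter_weights=[w for _, w in mix.values()])

image = pipe("your prompt here", num_inference_steps=30).images[0]
image.save("mix_test.png")
```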
This has to do with overfitting in the training process and errors on my part. Rather than address those errors directly (which I cannot do with limited resources, since it would require many, many iterations of the LoRAs to test and find the optimal setups), using blends mitigates overfitting and generally improves performance, as you can see from the plethora of merge checkpoints on Civitai, including ones that simply merge newer versions of a model into the older version.
Basically, older versions may "understand" something better than a newer version, and vice versa, but as long as your versions are MOSTLY improved, the merge process will, over time, lead the model to become a better generalizer. This particular LoRA, which directly targets the generalization and capabilities of the model, is no exception.
Love y'all, and this community.
If anyone who has the resources wants to collaborate on further training, please contact me. I have had a great deal of success in improving prompt adherence, and I suspect this can be grown massively with a solid community effort.
Carefully examine the weights used to know how to mess with this LoRA. Think of it like adjusting the focus on a lens you are looking through. Every prompt and checkpoint combination will have different needs, but ultimately, most of them can be dialed in such that adherence starts working within a range where it wasn't working previously.
I suppose I will have to do a video on the "why" behind this soon, as my ADHD and time constraints make writing it up the way I want to beyond my current capacity. But a video I probably can do, although it will be... chaotic.
This model is based on work I did on my "Unsettling" LoRA. It uses some of the images generated there, along with subsequent images made using the LoRA progeny of those, as well as the techniques I experimented with.
Basically, the goal of this LoRA is to "semantically shift" SDXL such that terms with a set meaning are entirely changed in an internally consistent manner. I used a technique to do this partially in the Unsettling LoRA, although it was overtrained, and became intrigued by the idea that "good" prompts remain "good," albeit on a different axis, even if a given model's internal understanding of them "shifts." In other words: a unique and interesting prompt can create unique and interesting images across multiple new themes if you play with the brain of the model in a directed way.
How did I do this?
I found areas of overtraining within SDXL (Mona Lisa, Pillars of Creation, etc.), targeted them, and redirected them to new images. As I suspected, this had ripple effects on how the entire model perceives the concepts connected to the modified images, and those effects are quite substantial.
UPDATE
Since this started, the purpose of this LoRA has changed substantially: it is now basically about improving SDXL's overall prompt adherence and win rate, using very small training datasets that target the areas of overfitting in the model and teach it to generalize them.
A side effect of this is that it is a lot easier to produce images at arbitrary resolutions.
Description:
Expanded the training dataset by 50% and used several training methods, with a preference-based merge as the final LoRA.
Trained words:
Name: Astigmatism_-0.6b.safetensors
Size (KB): 1558807
Type: Model
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success