
Recently I have been trying to consolidate the extracted LoRA information, but I failed. My original idea was to build a DLC-like extension model that would carry the extracted information over to other models, but I have never managed to push the number of stored LoRAs past five.
That said, if you lower your expectations, this mixing method is close to perfect. Its only drawbacks are that the mixing process is fairly tedious and that, for now, it only works with SD1.5.
I will share the research results and the mixing workflow later.
Still, I honestly do not know whether anything I am doing is right or wrong.
I wish I could solve all of this, but it is very hard. I do not know whether what I am doing is meaningful, and I sometimes suspect I am just wasting my time. Worst of all, I lack the courage to face any of it; perhaps this will become the biggest embarrassment of my online life. Either way, I will hold on a little longer.
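For context on what "transmitting extracted information to other models" means in LoRA terms: an extracted LoRA stores a low-rank weight delta, and merging it folds that delta back into the base weights as W' = W + α·(B·A). Below is a minimal numpy sketch under that assumption; all names, shapes, and the scale factor are illustrative, not the author's actual tool or pipeline.

```python
import numpy as np

def merge_lora(base_weight, lora_down, lora_up, alpha=1.0):
    """Fold one extracted LoRA delta into a base weight matrix:
    W' = W + alpha * (up @ down). Names are hypothetical, for
    illustration only; real SD1.5 merges apply this per layer."""
    delta = lora_up @ lora_down          # low-rank update B @ A
    return base_weight + alpha * delta

# Toy example: a 4x4 base weight and a rank-1 LoRA pair.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
down = rng.standard_normal((1, 4))       # A: rank x in_features
up = rng.standard_normal((4, 1))         # B: out_features x rank
W_merged = merge_lora(W, down, up, alpha=0.5)
```

Stacking several such deltas onto the same base weight is what "saving multiple LoRAs" amounts to, which is also why their effects can start to interfere as more are added.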
What do you think model mixing requires? I think there are only two answers: the models and the mixing method, haha.
Without a sufficiently diverse set of models, any mixing method is equally useless; likewise, without an efficient mixing method, even the best models will not get you far.
Anyway, that was just me venting.
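Since the post frames blending as "models plus a method", here is a sketch of the most basic method: a weighted sum over matching tensors of two checkpoints. The dictionary layout and key names are illustrative only; real SD merges operate on safetensors state dicts, often with per-block weights.

```python
import numpy as np

def weighted_merge(model_a, model_b, t=0.5):
    """Naive weighted-sum merge of two checkpoints with identical keys:
    W = (1 - t) * A + t * B. A sketch of the simplest blending method,
    not the mixing workflow described in this post."""
    return {k: (1.0 - t) * model_a[k] + t * model_b[k] for k in model_a}

# Toy checkpoints with a single shared tensor.
a = {"layer.weight": np.ones((2, 2))}
b = {"layer.weight": np.full((2, 2), 3.0)}
merged = weighted_merge(a, b, t=0.5)
```

With t=0.5 every entry of the merged tensor is the midpoint of the two sources; diversity of the inputs, as the post notes, is what makes the output worth having.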
Below are demonstration pages for some of the research results:
Model Deconstruction Research Page
https://civitai.com/models/579793
LoRA Extraction Demonstration Page
https://civitai.com/models/1086230
Description:
It may sound absurd, but this method really does achieve the effect that model mixing should, in theory, produce.
However, as the number of merged LoRAs grows, the effectiveness of the LoRAs already merged in declines.
Perhaps early-generation LoRAs are simply not suited to this mixing method.
LoRA: https://civitai.com/models/14517?modelVersionId=17096
https://civitai.com/models/39253?modelVersionId=45171
Trained words:
Name: kmMIXUnexpected_overFittingMIX.safetensors
Size (KB): 2414699
Type: Model
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success