KM-MIX Unexpected Product: Efficient Learning version (ID: 1067590)

Recently I have been trying to consolidate the extracted LoRA information, but I failed. My initial idea was to build an extension model, something like a DLC, that would transfer the extracted information into other models; however, the number of LoRAs whose information can be stored this way has never exceeded five.

That said, if you lower your expectations, this mixing method could be called perfect. Its drawbacks are that the mixing process is fairly tedious and that, for now, it only works with SD 1.5.

I will share the research results and the mixing workflow later.

Still, I do not know whether everything I am doing is right or wrong.

I wish I could solve all of this, but it is too hard. I do not know whether what I am doing is meaningful, and I even suspect I am wasting my time. Worst of all, I lack the courage to face it; this may well become the biggest online embarrassment of my life. In any case, I will hold on a little longer.

What do you think model mixing requires? I think there are only two answers: the models and the mixing method, haha.

Without sufficiently diverse models, any mixing method is equally ineffective.

Likewise, without an efficient mixing method, even the most diverse models go to waste.

Alright, that was just some grumbling.

Here are some demonstration pages for the research results:

Model Deconstruction Research Page

https://civitai.com/models/579793

LoRA Extraction Demonstration Page

https://civitai.com/models/1086230
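
For context, LoRA extraction is usually done by taking the weight difference between a tuned checkpoint and its base and truncating it with SVD. The exact procedure behind the demo page above has not been published, so the sketch below only illustrates that standard technique; the function name, the default rank, and the key naming mentioned in the comments are assumptions.

```python
# Minimal sketch of LoRA extraction via truncated SVD of a weight delta.
# This is NOT the author's published procedure; `rank` and all names here
# are illustrative assumptions.
import torch

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 32):
    """Factor (w_tuned - w_base) into low-rank matrices so that up @ down ~= delta."""
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    sq = s[:rank].sqrt()                # split each singular value between the factors
    up = u[:, :rank] * sq               # "lora_up.weight",   shape (out, rank)
    down = sq[:, None] * vh[:rank, :]   # "lora_down.weight", shape (rank, in)
    return up, down
```

Applied to every 2-D weight that differs between the two checkpoints, this yields the usual up/down pairs; the rank bounds how much of each delta survives, which is presumably where a hard cap like "five LoRAs" starts to bite.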

Description:

The model has been made to remember the following LoRAs.

The gap between updates has been far too long, but I only just got results from the mixing experiment. Although the model is a bit rough, I will optimize it later to resolve prompt-word conflicts and dilution.

This mixing method is highly efficient: it can capture 100% of a LoRA's effects. The downside is that most LoRAs were trained with highly similar prompt words, which makes the result harder and harder to control as the number of merges grows. I am not sure how current LoRAs fare, since I still have more than 10,000 early LoRAs that I have not tested. I had better not let my material library grow any further. (A sketch of a conventional merge appears after the weight list below.)

https://civitai.com/models/80023

Mix weight: 10.0

https://civitai.com/models/147176

Mix weight: 9.0

https://civitai.com/models/57525

Mix weight: 4.0
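
Since the exact mixing workflow has not been shared yet, the following is only a minimal sketch of a conventional additive LoRA merge using the diffusers library, not the method described above. The local file and directory paths are hypothetical; only the three mix weights are taken from the list.

```python
# Minimal sketch of a conventional additive LoRA merge (not the author's method).
# Local paths are hypothetical; only the mix weights come from the list above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./sd15-base",                       # hypothetical local SD 1.5 base model
    torch_dtype=torch.float16,
)

loras = [
    ("lora_a.safetensors", 10.0),        # mix weight 10.0
    ("lora_b.safetensors", 9.0),         # mix weight 9.0
    ("lora_c.safetensors", 4.0),         # mix weight 4.0
]

for path, weight in loras:
    pipe.load_lora_weights(path)
    pipe.fuse_lora(lora_scale=weight)    # bake W' = W + scale * (up @ down) into the base
    pipe.unload_lora_weights()           # drop the adapter; the fused weights remain

pipe.save_pretrained("./kmmix-merged")   # saves a diffusers folder; converting back to a
                                         # single .safetensors checkpoint is a separate step
```

Note that scales as large as 10.0 amplify the prompt-word collisions described above, since every merged delta lands in the same base weights.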

Trained words:

Name: kmMIXUnexpected_efficientLearning.safetensors

Size (KB): 2414699

Type: Model

Pickle scan result: Success

Pickle scan message: No pickle imports

Virus scan result: Success
