Simple and efficient model mixing method (Version: Sample 5 Art Style Stack, ID: 1231245)

Simple and efficient model blending

This is a model blending method I developed myself. In short, I want to share my findings, and I am also curious about the methods you use to blend models.

The blending principle exploits the overfitting phenomenon that occurs during model mixing, and builds on a deeper study of that effect.

This blending approach may therefore apply to all AI models. I hope so, but verifying that is obviously somewhat difficult.

In my view, this approach can address the biggest problem troubling model mixing: the lack of source material.

This research faces many obstacles: outdated versions, a niche field, no references, limited material, no support, and even some findings and conjectures that no one has proposed before. Right now the biggest problems are actually not time or personal ability. In any case, would you like to give it a try? Exploring the unknown is great fun.

Below is some additional research information.

LoRA mixing weights

Most LoRAs have a maximum usable weight somewhere in the 50-100 range. Going beyond that range causes numerical overflow, so overflow detection (nan-check) must be enabled.
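The step above can be sketched as a merge followed by an overflow check. This is a minimal illustration, not the author's actual tooling: the names `merge_lora` and `nan_check` are hypothetical, and computing in half precision is an assumption about why very high weights overflow.

```python
import numpy as np

def merge_lora(base, delta, weight, dtype=np.float16):
    # Merge a LoRA weight delta into the base weights at the given strength,
    # computed in a half-precision storage dtype (assumption: typical of
    # checkpoint files, and the reason large weights can overflow).
    w = np.asarray(weight, dtype=dtype)
    return (base.astype(dtype) + w * delta.astype(dtype)).astype(dtype)

def nan_check(tensor):
    # The "nan-check" the text refers to: True if the merged tensor
    # contains NaN or Inf, i.e. the merge overflowed numerically.
    return bool(not np.all(np.isfinite(tensor)))

base = np.ones((4, 4), dtype=np.float32)
delta = np.ones((4, 4), dtype=np.float32)

ok = merge_lora(base, delta, weight=50)    # within the typical 50-100 range
bad = merge_lora(base, delta, weight=1e6)  # far past the overflow point
```

With `weight=50` every entry stays finite; at an extreme weight the half-precision values saturate to Inf and `nan_check` flags the merge.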

LoRA weight stages

As the LoRA weight value increases, the LoRA passes through the following stages:

1. Model information dominates.

2. LoRA information dominates, supplemented by model information.

3. LoRA information dominates entirely.

4. Noise.

5. Numerical overflow.

Each LoRA has a different maximum weight, which I call its learning depth.

I will now explain the mixing details.

1. Why are the LoRA weights set so high?

The LoRA weight represents that model's proportion in the mix; raising it avoids unwanted influence from the other models.
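As a toy illustration of weight as a mixing proportion (the arrays and the `lora_fraction` helper are hypothetical, not part of the author's method): the larger the LoRA weight, the smaller the base model's relative contribution to the merged tensor.

```python
import numpy as np

base = np.array([1.0, -0.5, 2.0])    # toy base-model weights
delta = np.array([0.2, 0.3, -0.1])   # toy LoRA weight delta

def lora_fraction(weight):
    # Fraction of the merged tensor's total magnitude that comes from
    # the LoRA term rather than the base model.
    lora_term = np.abs(weight * delta).sum()
    return float(lora_term / (np.abs(base).sum() + lora_term))
```

At weight 1 the base model still dominates the magnitude; at weight 100 the LoRA term accounts for well over 90 percent, which matches the stated motive of drowning out unwanted model influence.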

2. The α value for model mixing

α = -1 is used to extract the LoRA information; α = -2 inverts the extracted LoRA information.
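The text does not state which merge formula the α applies to. Assuming the plain weighted-sum form used by common merge tools, a negative α extrapolates away from model B, which is consistent with first isolating and then scaling the LoRA difference; the arrays here are illustrative only.

```python
import numpy as np

def weighted_sum(A, B, alpha):
    # Plain weighted-sum merge (assumption about the tool's formula):
    # result = (1 - alpha) * A + alpha * B.
    return (1 - alpha) * A + alpha * B

A = np.array([1.0, 2.0, -0.5])        # model holding only base information
delta = np.array([0.5, -0.25, 0.1])   # LoRA information baked into B
B = A + delta

m1 = weighted_sum(A, B, alpha=-1)  # = A - delta: the LoRA difference, sign-flipped
m2 = weighted_sum(A, B, alpha=-2)  # = A - 2 * delta: the same difference, doubled
```

Under this formula, α = -1 yields the base model minus the LoRA difference, and α = -2 doubles that displacement, i.e. it pushes twice as far in the inverted direction.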

3. How to mix again on top of the existing result

Activate the resulting model, a process that requires 13 mixing passes; after that, the next LoRA can be added.

4. Flaws of this mixing method

Because this method captures all of a LoRA's information, the more LoRAs are mixed in, the less stable the result becomes. I have not found a solution for this yet.

5. Can other mixing methods achieve this effect?

Other mixing methods can achieve it as well, but this method has the highest lower bound on quality.

6. What is the principle behind this mixing method?

I found some patterns through extensive overfitting-based mixing, but the principle remains unknown and is even hard to explain. Simply put, certain special models are used to extract information from other models, and the extracted models are then processed further.

As for why these phenomena occur, I can only answer: I don't know.

Afterwards you will gain many new insights into LoRA. In any case, my research has stalled, and a breakthrough is now unlikely.

Below are the pages related to the rest of the research. Because this page's space is limited, the remaining information will be published on other pages; it is still being edited.

Model Deconstruction Research Page

https://civitai.com/models/579793

Demonstration page for model optimization using this method

https://civitai.com/models/822917

Description:

https://civitai.com/models/12427/old-school-shoujo

Weight 150

https://civitai.com/models/6304?modelVersionId=7387

Weight 650

I originally planned to upload a sample mixed from an incorrectly trained LoRA, but this combination also turned out quite well.

An error occurred during the training of the Noob-leyon LoRA; I will upload an image to demonstrate it. This phenomenon is actually very common: most LoRAs do not meet the minimum standard for mixing. From my observation, the pass rate is roughly one percent.

Common problems with LoRAs in mixing:

Incomplete learning, insufficient learning depth, and incorrect learning. Some LoRAs also have problems in their training captions. In short, early LoRAs had very many shortcomings; I hope current LoRAs have improved.

But before that, I need to find a way to address the mix dilution caused by homogenized prompt captions, which looks completely impossible to achieve.

Trigger words:

Name: simpleAndEfficient_sample5ArtStyleStack.safetensors

Size (KB): 2414699

Type: Model

Pickle scan result: Success

Pickle scan message: No Pickle imports

Virus scan result: Success

