
Work in progress
Models marked with ? are archived and have yet to be updated.
Check the About this Version of the chosen workflow for a proper introduction.
Question: how do I embed the nodes into the images or videos?
If there is a way to load OmniGen/CogVideoX/LLM/t2a/AnimateDiff (GPU) through a CustomSamplerAdvanced, please let me know.
Description:
I generate the input image with a PDXL model, but you can use your favorite t2i model. I use v-prediction to maximize creativity. My favorite noise chain for t2i is: 1 step of fe_heun3, 1 step of SamplerSonarDPMPPSDE (student-t), then 2 steps of lcm (uniform). For a better first step you can use the SamplerDPMAdaptative node that is left disconnected; it is optimized to run fast, but you can play with it. For the second stage you can prolong the lcm (uniform) for smoother but less creative results, or add a SamplerRES_Momentumized (highress-pyramid) and finish with 2 steps of lcm (uniform). You can also try the ClownSampler node for stage 2 to get a different result; the lcm (uniform) could likewise be swapped for the ClownSampler, but I really like what lcm (uniform) does.

Now for LTX video: you don't need the sampler chain, but if you want the best from the model, experimenting is your best bet. Also, the CFG modulates the movement, the consistency, and the artifacts, so you may want to experiment with a different CFG for each 1/3 of the generation. That is also a reason for the split sigmas, which improve the generation by a lot.
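The split-sigmas idea above can be sketched in plain Python: take one descending sigma schedule, cut it into three contiguous segments that share their boundary sigmas (so each sampling stage resumes exactly where the previous one stopped), and pair each segment with its own CFG value. This is a hypothetical illustration of the concept, not the ComfyUI node's actual code; the `split_sigmas` helper and the example sigma/CFG values are my own assumptions.

```python
# Hypothetical sketch: splitting a denoising sigma schedule into thirds
# so each segment of the generation can run with its own CFG value.
def split_sigmas(sigmas, parts=3):
    """Split a descending sigma schedule into `parts` contiguous segments.

    Consecutive segments share their boundary sigma, so a sampler run on
    segment i+1 starts at the exact noise level where segment i ended.
    """
    n_steps = len(sigmas) - 1          # a schedule of N+1 sigmas = N steps
    base, rem = divmod(n_steps, parts)
    segments, start = [], 0
    for i in range(parts):
        end = start + base + (1 if i < rem else 0)
        segments.append(sigmas[start:end + 1])  # include boundary sigma
        start = end
    return segments

# Example 7-step schedule (values are illustrative, not from any model).
sigmas = [1.0, 0.8, 0.6, 0.45, 0.3, 0.18, 0.08, 0.0]
thirds = split_sigmas(sigmas)
cfgs = [7.0, 5.0, 3.5]  # e.g. stronger guidance early, weaker late

for seg, cfg in zip(thirds, cfgs):
    print(f"cfg={cfg}: sigmas={seg}")
```

Each printed segment would feed one sampler pass; in ComfyUI the same effect is achieved by chaining split-sigma nodes into separate custom-sampler stages with different CFG inputs.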
Trained Words:
Name: 4Or9StepsSamplerChainsNoiseTypes_Ltx.zip
Size (KB): 13
Type: Archive
Pickle Scan Result: Success
Pickle Scan Message: No Pickle imports
Virus Scan Result: Success