
This model allows you to use the FLUX base model in ComfyUI as follows:
- Choose between img2img and txt2img generation
- Use LLM conditioning (requires Ollama and its models/nodes; see the sketch below for what this step does)
- Use an LLM-based prompt modifier
You can combine these options in many ways.
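For reference, here is a minimal Python sketch of what the Ollama-based prompt modifier does conceptually: a short prompt is sent to a local Ollama instance and a rewritten, more detailed prompt comes back. This is only an illustration assuming Ollama's default /api/generate endpoint on port 11434 and the llama3.1 model; the instruction wording and function name are my own, not taken from the workflow's nodes.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def expand_prompt(prompt: str, model: str = "llama3.1") -> str:
    """Ask a local Ollama model to rewrite a short prompt into a detailed image prompt."""
    payload = {
        "model": model,
        # The instruction text below is illustrative, not the wording used by the workflow.
        "prompt": f"Rewrite this as a detailed image-generation prompt: {prompt}",
        "stream": False,
    }
    request = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

# Example usage: expand a terse prompt before feeding it to the FLUX conditioning.
print(expand_prompt("a lighthouse at dusk"))
```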
You can specify different resolutions for img2img and txt2img. The main setting defines the image size in megapixels together with the aspect ratio; the remaining values are derived automatically.
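As an illustration of that resolution calculation, the sketch below derives width and height from a megapixel target and an aspect ratio. The rounding to multiples of 64 and the function name are assumptions chosen for clarity, not lifted from the workflow's resolution node.

```python
def resolution_from_megapixels(megapixels: float, aspect_ratio: float, step: int = 64) -> tuple[int, int]:
    """Derive width/height from a target size in MPixels and an aspect ratio (width/height)."""
    total_pixels = megapixels * 1_000_000
    height = (total_pixels / aspect_ratio) ** 0.5
    width = height * aspect_ratio
    # Round to the nearest multiple of `step`; this rounding is an assumption,
    # reflecting the dimension multiples latent diffusion models typically expect.
    width = max(step, round(width / step) * step)
    height = max(step, round(height / step) * step)
    return int(width), int(height)

# Example: 1 MPixel at a 16:9 aspect ratio -> roughly 1344 x 768
print(resolution_from_megapixels(1.0, 16 / 9))
```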
You can specify a custom prompt, which is used if LLM conditioning is not selected.
You can specify a seed value. Open the seed node in the image input group to change the settings.
The workflow is based on "ComfyUI: Flux with LLM, 5x Upscale (Workflow Tutorial)".
I've optimized it for better readability (at least for me), shortened the logic, and combined the main settings into one group. I left out the upscaling part.
My main motivation for building this workflow was the learning experience of working in ComfyUI.
Links for Models:
Flux.1 [dev]: https://huggingface.co/black-forest-l...
Flux.1 [schnell]: https://huggingface.co/black-forest-l...
t5xxl: https://huggingface.co/comfyanonymous...
ControlAltAI Nodes: https://github.com/gseth/ControlAltAI...
CivitAI LoRAs used:
https://civitai.com/models/562866?mod...
https://civitai.com/models/633553?mod...
Ollama:
llama3.1: https://ollama.com/library/llama3.1
llava-llama3: https://ollama.com/library/llava-llama3
llava (alternate vision model): https://ollama.com/library/llava
Name: fluxWithLLM_fluxWithLLMV10.zip
Size (KB): 13
Type: Archive
Pickle scan result: Success
Pickle scan message: No Pickle imports
Virus scan result: Success