Hello!
This is quite random, and I would have posted it in a Discussions Q&A thread instead, but those are disabled - so I am posting it as a ticket... apologies in advance.
I have been using ComfyUI for a little while, but my biggest problem is the absolute noodle salad of connectors between nodes. This workflow (using the image-a variant) is awesome and exactly the kind of workflow I would love to build myself to pair with OpenWebUI in the future as I work towards building my own AI server. It's also ridiculously fast.
Where or how did you pick up the various bits and bobs of information needed to make this workflow in particular? Although the UI is hard on my eyes, I still want to understand the actual structure and, heh, "workflow" behind these a little better, so I can build one suited to my own setup. My goal is a pipeline within OpenWebUI that mimics ChatGPT's "Image" feature: the user describes an image in plain text, then a chain of models and tools figures out which styles to use, augments the user's input prompt accordingly, and produces proper input for the T2I model.
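To make the idea concrete, here is a rough sketch of the pipeline I have in mind. All function names here are hypothetical placeholders (the style inference would really be an LLM call, not keyword matching) - this is not real OpenWebUI or ComfyUI API code:

```python
def infer_styles(user_text: str) -> list[str]:
    """Placeholder for the model/tool chain that decides which styles fit.

    In the real pipeline this would be an LLM call; here it is just a
    keyword check to illustrate the data flow.
    """
    styles = []
    if "portrait" in user_text.lower():
        styles.append("studio lighting")
    if "landscape" in user_text.lower():
        styles.append("wide angle, golden hour")
    return styles or ["photorealistic"]


def build_t2i_prompt(user_text: str, styles: list[str]) -> str:
    """Combine the user's plain-text request with the inferred style tags
    to produce the final input for the T2I model."""
    return f"{user_text}, {', '.join(styles)}"


user_text = "A quiet mountain landscape at dawn"
prompt = build_t2i_prompt(user_text, infer_styles(user_text))
print(prompt)
# → A quiet mountain landscape at dawn, wide angle, golden hour
```

The augmented prompt would then be handed to the T2I stage of the ComfyUI workflow.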
I would be very happy if you could share your resources in this regard. :)
Kind regards and many thanks for this awesome workflow,
Ingwie