What's your workflow to generate images that come out close to what you've imagined?
I have been toying around with Stable Diffusion for some time now and have managed to get some great images out of it.
However, as I dive deeper I want images that match what I imagine as closely as possible, and I'm kinda struggling to get there.
For now, I work with ControlNet and inpainting, which help a lot, but I have yet to produce images I'm really satisfied with.
How's your workflow when composing specific images? Do you complement it with Photoshop (or similar)?
You might find this useful; it allows for regional prompting: https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111
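If you're curious what "regional prompting" actually does under the hood: each region of the latent canvas is denoised toward its own prompt, and the noise predictions are blended together with masks. Here's a minimal conceptual sketch of that idea using the diffusers library, not the extension's actual code; the model id, prompts, left/right split, and CFG scale of 7.5 are all placeholder choices, and it assumes a reasonably recent diffusers version:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
sched = pipe.scheduler
sched.set_timesteps(30, device="cuda")

# Two prompts, each owning half of the canvas via a latent-space mask.
prompts = ["a snow-covered mountain", "a tropical beach at sunset"]
h, w = 64, 64  # latent resolution for a 512x512 image
masks = torch.zeros(len(prompts), 1, h, w, device="cuda", dtype=torch.float16)
masks[0, ..., : w // 2] = 1.0  # left half -> prompt 0
masks[1, ..., w // 2 :] = 1.0  # right half -> prompt 1

# encode_prompt returns (conditional, unconditional) embeddings.
embeds = [pipe.encode_prompt(p, "cuda", 1, True) for p in prompts]

latents = torch.randn(1, 4, h, w, device="cuda", dtype=torch.float16)
latents = latents * sched.init_noise_sigma

with torch.no_grad():
    for t in sched.timesteps:
        blended = torch.zeros_like(latents)
        for (cond, uncond), mask in zip(embeds, masks):
            inp = sched.scale_model_input(torch.cat([latents] * 2), t)
            noise = pipe.unet(
                inp, t, encoder_hidden_states=torch.cat([uncond, cond])
            ).sample
            n_uncond, n_cond = noise.chunk(2)
            guided = n_uncond + 7.5 * (n_cond - n_uncond)  # CFG, scale 7.5
            blended += guided * mask  # masks partition the canvas
        latents = sched.step(blended, t, latents).prev_sample
    decoded = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample

pipe.image_processor.postprocess(decoded, output_type="pil")[0].save("regional.png")
```

The linked extension does this (plus tiling and upscaling tricks) inside the AUTOMATIC1111 UI, so you don't have to write any of it yourself.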
Main thing I do is just lots of inpainting passes. Sketch things out in another program first if I'm adding or subtracting something big.
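In case it helps, this is roughly what one of those inpainting passes looks like if you script it with diffusers instead of the web UI. A sketch, not my exact settings; the model id, file names, and parameter values are placeholders:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # any inpainting checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("base.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = repaint

# Each pass regenerates only the masked region; save the result, paint a
# new mask, and repeat until the composition matches what you pictured.
result = pipe(
    prompt="a red vintage car parked on a rainy street, photorealistic",
    image=image,
    mask_image=mask,
    strength=0.75,           # lower values stay closer to your rough sketch
    num_inference_steps=30,
).images[0]
result.save("pass_01.png")
```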
For composition I use the semantic segmentation ControlNet: sketch the layout loosely, then inpaint or run more ControlNet passes. Of course I use GIMP or any other tool to fine-tune the image or to "force the model's hand" a little during inpainting.
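For anyone who wants to try the segmentation approach outside the UI, here's a rough diffusers sketch. It assumes the commonly published SD 1.5 and ControlNet seg checkpoints; the seg map is a color-coded layout image you paint yourself (e.g. using the ADE20K palette the model was trained on):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Hand-painted segmentation map: each color region tells the model where
# "sky", "building", "person", etc. should end up in the final image.
seg_map = Image.open("seg_sketch.png").convert("RGB").resize((512, 512))

image = pipe(
    prompt="a person in front of an old brick building, golden hour",
    image=seg_map,
    num_inference_steps=30,
    controlnet_conditioning_scale=1.0,  # lower it to loosen the layout constraint
).images[0]
image.save("composed.png")
```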