Software: GIMP, Stable Diffusion (WebUI Forge).
Hardware: NVIDIA GeForce GTX 1660 (6 GB VRAM).
Goal: Create a fully artist-controlled workflow that combines hand painting with AI to speed up the process and improve the result.
The idea is to start with a very basic sketch in GIMP, then use this sketch to generate a rough image-to-image result in SD without breaking the idea of the sketch. Next, open that rough result in GIMP to add and correct details by hand, then push the improved drawing through image-to-image generation in SD again, refine that output by hand in GIMP once more, and so on. Basically, use as many iterations as needed to reach a result with the details and features I’ve planned, not a random result that has gone out of control.
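For anyone who prefers to script the GIMP-to-SD round trip instead of clicking through the UI, here is a minimal sketch of one iteration. It assumes WebUI Forge is launched with the `--api` flag so the standard `/sdapi/v1/img2img` endpoint is available; the file names and prompt are placeholders, and on newer builds the sampler may need to be given as "DPM++ 2M" with a separate "scheduler": "karras" field.

```python
import base64
import requests

API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"  # default WebUI Forge address

def img2img_pass(in_path, out_path, denoise, seed, steps=40, cfg=7):
    """Send a GIMP export through one image-to-image pass and save the result."""
    with open(in_path, "rb") as f:
        init_image = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "init_images": [init_image],
        "prompt": "concept art of a human character",  # placeholder prompt
        "sampler_name": "DPM++ 2M Karras",  # may be "DPM++ 2M" + "scheduler": "karras" on newer builds
        "steps": steps,
        "cfg_scale": cfg,
        "denoising_strength": denoise,
        "seed": seed,
    }
    r = requests.post(API_URL, json=payload, timeout=600)
    r.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))

# One iteration of the loop: export the sketch from GIMP, run a pass, reopen the result in GIMP.
img2img_pass("sketch_v1.png", "rough_v1.png", denoise=0.4, seed=3)
```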
Here are the settings I used during the first experiment (a sketch of how they map onto API parameters follows after the list):
Steps 1 and 2:
Sampling method - DPM++ 2M Karras
Sampling steps - 40
Scale (resize by) - 1
CFG Scale - 7
Denoising strength - 0.4
Seed - 3
Step 3:
Sampling method - DPM++ 2M Karras
Sampling steps - 40
Scale (resize by) - 1
CFG Scale - 7
Denoising strength - 0.3
Seed - 7
Step 4:
Sampling method - DPM++ 2M Karras
Sampling steps - 40
Scale (resize by) - 2
CFG Scale - 7
Denoising strength - 0.4
Seed - 3
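A note on the Scale setting above: in the UI it is the "resize by" factor, but as far as I know the img2img API only takes explicit width and height, so a script has to compute them from the source image. A minimal sketch, with a placeholder file name:

```python
from PIL import Image

def resize_fields(in_path, scale):
    """Translate the UI's 'resize by' Scale into explicit width/height for the API payload."""
    w, h = Image.open(in_path).size
    return {"width": int(w * scale), "height": int(h * scale)}

# Step 4: same settings as steps 1 and 2 (denoise 0.4, seed 3), but at twice the resolution.
payload_extra = resize_fields("detailed_v3.png", scale=2)
```

These fields would simply be merged into the img2img payload from the earlier sketch.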
I would like to improve the method to the point of fully photorealistic results, but I’m not yet sure how to achieve that. I also need to try the method on different subjects, not just human character concepts.