I have more ideas for Stable Diffusion. My nights and weekends are consumed! This time: for inpainting, why create a mask image manually when AI can automatically build a mask from a text prompt? Someone much smarter has already published a paper (arXiv:2112.10003 [cs.CV]), with source code, to do just this!
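That paper is CLIPSeg, and for anyone curious about the gist before reading the full post, here is a minimal sketch of the idea using the Hugging Face transformers port of the model. The model ID, prompt, and 0.4 threshold are my illustrative choices, not from the paper’s own scripts:

```python
# Sketch: build an inpainting mask from a text prompt with CLIPSeg
# (arXiv:2112.10003), via its Hugging Face transformers port.
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("input.png").convert("RGB")  # hypothetical input file
inputs = processor(text=["a dog"], images=[image], return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # low-resolution segmentation logits

# Threshold the sigmoid probabilities into a binary mask, then resize it
# back to the source image size. 0.4 is an arbitrary cutoff for the sketch.
probs = torch.sigmoid(logits).squeeze()
mask = Image.fromarray((probs.numpy() > 0.4).astype("uint8") * 255)
mask = mask.resize(image.size)
mask.save("mask.png")
```

The resulting mask.png can then be fed straight into an inpainting step, which is exactly the pipeline the post explores.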
More Stable Diffusion! This time attempting to add inpainting/masking on top of my previous img2img.py code, merging both capabilities, while disregarding the out-of-box inpainting.py script, which has no parameters for positive or negative prompts. Keyword being attempting...
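For contrast, the diffusers library’s inpainting pipeline does expose both positive and negative prompts, so a sketch like the one below is roughly the end goal, though it is not the code from the post. The model ID, prompts, and file names are illustrative:

```python
# Sketch: prompt-driven inpainting with positive AND negative prompts,
# using diffusers' StableDiffusionInpaintPipeline for illustration.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float32,  # float16 has been flaky on Apple's MPS backend
)
pipe = pipe.to("mps")  # Apple Silicon; use "cuda" or "cpu" elsewhere

init_image = Image.open("input.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = repaint

result = pipe(
    prompt="a fluffy corgi sitting on a bench",
    negative_prompt="blurry, low quality",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```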
I’ve been playing around with the Stable Diffusion scripts a little (to be exact, Ben Firshman’s version). To help me understand the script, I decided to rewrite it the way I prefer to use it... either breaking or optimizing it in the process :P
Following from my previous post, AI-generated images with Stable Diffusion on an M1 mac: this time, using the image-to-image script, which takes an input “seed” image in addition to the text prompt. In this case the model uses the shapes and colors of the input image as a base for the AI-generated output.
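If you just want the shape of the technique without reading the whole post, it maps onto something like this sketch, using the diffusers img2img pipeline rather than the script from the post (parameter names follow recent diffusers versions; older releases used init_image instead of image):

```python
# Sketch: image-to-image generation, where an input "seed" image guides
# the composition. Uses diffusers for illustration, not the post's script.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float32,
)
pipe = pipe.to("mps")  # M1 Mac; "cuda" on NVIDIA GPUs

seed_image = Image.open("sketch.png").convert("RGB").resize((512, 512))

# strength=0.75: higher values let the prompt override more of the seed
# image's shapes and colors; lower values stay closer to the input.
result = pipe(
    prompt="a watercolor landscape, rolling hills at sunset",
    image=seed_image,
    strength=0.75,
    guidance_scale=7.5,
).images[0]
result.save("img2img.png")
```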
There has been a lot of buzz about Stable Diffusion for text-to-image synthesis, which saw its public release around 22 Aug 2022. You can read more on the Stability.AI blog and try it at Hugging Face. What’s groundbreaking is that it is open source, with a pre-trained downloadable model and modest system requirements, so anyone can try it on their own computer... anyone... like me!
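To make “anyone can try it” concrete: via the diffusers library (rather than the standalone scripts the original release shipped with), the smallest end-to-end text-to-image run looks roughly like this sketch:

```python
# Sketch: minimal text-to-image run with the open-source weights,
# via the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float32,
)
pipe = pipe.to("mps")  # or "cuda" / "cpu"

image = pipe("an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```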
Almost a year ago, Tencent researchers released GFPGAN, a face-restoration AI model trained specifically on faces to better upscale and restore details in low-resolution or damaged portrait photos. I thought I’d give it a whirl.
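The GFPGAN repo ships a command-line script and a small Python API; a minimal sketch of the latter follows. The weights file name and parameter values are illustrative, so check the repo’s README for the current signature:

```python
# Sketch: restoring faces in a low-resolution photo with GFPGAN's
# Python API (paths and params illustrative; see the repo's README).
import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(
    model_path="GFPGANv1.3.pth",  # pretrained weights, downloaded separately
    upscale=2,                    # upscale factor for the whole image
    arch="clean",
    channel_multiplier=2,
    bg_upsampler=None,            # optionally Real-ESRGAN for the background
)

img = cv2.imread("portrait_lowres.jpg", cv2.IMREAD_COLOR)
cropped_faces, restored_faces, restored_img = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True
)
cv2.imwrite("portrait_restored.jpg", restored_img)
```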