Differential Diffusion is a new framework for in-painting without a dedicated in-painting model. All it needs is a mask (a change map): the lighter an area of the mask, the more strongly that area is re-painted.
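The core idea can be sketched in a few lines of NumPy. This is my own simplification, not ComfyUI's actual implementation: at each denoising step, pixels whose change-map value is below the current noise level are reset to a re-noised copy of the original, so dark regions only "unlock" for editing near the end of sampling (and thus barely change), while bright regions are repainted from the start.

```python
import numpy as np

def differential_step(latent_pred, original, change_map, t, rng):
    """Hedged sketch of one Differential Diffusion step.

    t is the current noise level, running from 1.0 down to 0.0.
    change_map is in [0, 1]: 1 = repaint fully, 0 = keep the original.
    Pixels with change_map < t are not yet unlocked for editing, so they
    are replaced by the original re-noised to level t. (A real sampler
    would use its own noise schedule; this linear blend is illustrative.)
    """
    noise = rng.standard_normal(original.shape)
    noised_original = (1.0 - t) * original + t * noise
    keep = change_map < t  # locked pixels follow the original image
    return np.where(keep, noised_original, latent_pred)
```

For example, at t = 0.5 a pixel with change-map value 0.9 keeps the model's prediction, while a pixel at 0.1 is still pinned to the (re-noised) source image.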


Update ComfyUI to get this merge, which was committed two days ago.


Here is my simplified and very standard workflow, following the example in the previous link.

  • I used SDXL Turbo to generate the source image (which must be 512x512) - but any SDXL model should work.
  • I created the black-and-white mask in paint.net and used the ThresholdMask node to adjust its intensity. Alternatively, you can use the LoadImage node, right-click the image, and choose Open in MaskEditor to paint a 2-tone mask directly in ComfyUI - simpler, but it cannot produce a gradient mask.
  • The source image and mask are passed to Differential Diffusion to in-paint over the white masked areas, generating a new image according to the second positive prompt. This requires two new nodes (in purple):
    • The first node is DifferentialDiffusion, which (I think) adjusts the SDXL model, and
    • The second is InpaintModelConditioning, which (I guess) conditions the input prompts, image and mask for inference using the adjusted model. But what do I know?
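If you want a gradient mask without reaching for paint.net, a short script can generate one. This is a hedged example of my own (the filename and 512x512 size are assumptions): it writes a left-to-right ramp as a binary PGM, a simple grayscale format that PIL-based loaders such as ComfyUI's LoadImage can read. White (255) areas get fully repainted; black (0) areas keep the source image.

```python
import numpy as np

# Build a 512x512 horizontal gradient: 0 (black) on the left -> 255 (white) on the right.
w = h = 512
ramp = np.linspace(0, 255, w).astype(np.uint8)
mask = np.tile(ramp, (h, 1))  # repeat the ramp on every row

# Write a binary (P5) PGM: header, then raw 8-bit pixels.
with open("gradient_mask.pgm", "wb") as f:
    f.write(b"P5 %d %d 255\n" % (w, h))
    f.write(mask.tobytes())
```

Load the resulting file with LoadImage and the repaint strength will fade smoothly across the image, which is exactly what a 2-tone MaskEditor mask cannot give you.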

ComfyUI Differential Diffusion workflow

For a comparison with previous methods, see my post on Using the SDXL-Inpainting 0.1 Model or, going further back, my Stable Diffusion script with inpainting mask... You may also be interested in trying CLIPSeg with SDXL to generate a mask from a text prompt, instead of creating the mask by hand.