Testing new PAG and Perp-Neg nodes in ComfyUI
I know it’s bad form to start off with a disclaimer, but the truth is, I do not know what I am doing. I am just testing out two new ComfyUI nodes, PerturbedAttentionGuidance and PerpNegGuider.
In my last post, I used ComfyUI-IF_AI_tools to integrate with the brxce/stable-diffusion-prompt-generator model running in Ollama. I wonder if I could use the base Mistral 7B model to help improve my uncreative prompts instead...
In my last post, I described running Mistral, a Large Language Model, locally using Ollama. To accompany that piece, I created a prompt and manually used AI to generate an image. Today, I’ll wire up a ComfyUI workflow to Ollama to do this seamlessly, thanks to ComfyUI-IF_AI_tools.
I keep posting about Stable Diffusion, but I do experiment with Large Language Models too! I do not have much to contribute in this regard, instead, here is the transcript of a game I played with the open source Mistral 7B model via Ollama.
More and more AI-generated images are shared as short video clips. So, here’s a quick test of Stable Video Diffusion - which was released back in November last year. Don’t know why I didn’t post this when I posted about AnimateDiff and the Hotshot Motion model around the same time.
Do you want to convert a 2D image into a 3D model auto-magically? On 5 March 2024, Stability AI and Tripo AI released TripoSR: Fast 3D Object Generation from Single Images that does exactly that!
Differential Diffusion is the newest method (framework) of in-painting without an in-painting model. Instead, all that is needed is a mask (map) where the lighter the area, the greater the re-painting applied.
Ever wished you could generate Stable Diffusion XL images with transparent backgrounds? Well, your wish has been answered by the smart people behind the Transparent Image Layer Diffusion using Latent Transparency paper. They have made their code and models available, and what do you know, Chenlei Hu has ported it to ComfyUI!
With the advent of techniques like Adversarial Diffusion Distillation and Latent Consistency models, A.I. image synthesis based on Stable Diffusion XL has been getting faster and faster. Here is just a quick comparison of a few models at 4 steps, some of which are fine-tuned and trained for realism.
Not long ago, in an attempt to obtain Consistent portraits using IP-Adapters for SDXL, I shared a comparison between IP-Adapter-Plus-Face and IP-Adapter-FaceID. Today I’ll look at InstantID.
An update to my previous post on Stable Cascade with ComfyUI - instead of requiring four separate model files, we now only need two checkpoints, and the ComfyUI workflow is now very straightforward!
On 12 Feb 2024, Stability.ai released Stable Cascade “research preview” (non-commercial license), and over the weekend, ComfyUI was updated to support this new model! Time to give it a go!
As a follow up to my last post regarding Consistent portraits using IP-Adapters for SDXL, this is a short comparison of the two face IP-Adapters for SDXL by h94 / xiaohu: namely, ip-adapter-plus-face_sdxl_vit-h.bin and ip-adapter-faceid_sdxl.bin.
Getting consistent character portraits generated by SDXL has been a challenge... until now! ComfyUI IPAdapter Plus (dated 30 Dec 2023) now supports both IP-Adapter and IP-Adapter-FaceID (released 4 Jan 2024)!
Time to try another ControlNet for Stable Diffusion XL - QR Code Monster v1 in ComfyUI. This ControlNet can influence SDXL such that the generated image “hides” a scannable QR code which, at first glance, looks like a photo!
I never tried generating video clips or animations with SDXL before, simply because my GPU is not powerful enough. But after testing out the LCM LoRA for SDXL yesterday, I thought I’d try the SDXL LCM LoRA with Hotshot-XL, which is something akin to AnimateDiff.
Stable Diffusion keeps improving at an astounding pace! This time, it’s the idea of distilling a model into a Latent Consistency Model (LCM) for very, very fast image generation with a quality trade-off. On 24 Oct 2023, the distilled Segmind Stable Diffusion 1B (SSD-1B) model was released, followed by a better implementation in the form of Latent Consistency LoRAs for SDXL and SSD-1B released on 9 Nov 2023.
Stability AI just released a new SD-XL Inpainting 0.1 model. Here is how to use it with ComfyUI.
Yesterday, I tried out Stability AI’s four Control-LoRAs but mentioned that I did not understand the output of the Revision “image-mixing” workflow. I’ve since done a bit more experimentation...
Less than a week after my post testing diffusers/controlnet-canny-sdxl-1.0, along comes Stability AI’s own ControlNets, which they call Control-LoRAs! Not one but 4 of them - Canny, Depth, Recolor and Sketch models!