Recently I tried Fooocus by Lvmin Zhang (lllyasviel), which fulfills its promise to let you “Focus on prompting and generating” - it is certainly easy to use! But shortly after its release, someone “ported” the code to ComfyUI as a Custom Node! So of course it’s time to test it out...
Get caught up:
Part 1: Stable Diffusion SDXL 1.0 with ComfyUI
Part 2: SDXL with Offset Example LoRA in ComfyUI for Windows
Part 3: CLIPSeg with SDXL in ComfyUI
Part 4: Two Text Prompts (Text Encoders) in SDXL 1.0
Part 5: Scale and Composite Latents with SDXL
Part 6: SDXL 1.0 with SDXL-ControlNet: Canny
Part 7: This post!
About Fooocus
Among the many enhancements, Fooocus implements a “native refiner swap inside one single k-sampler... Fooocus uses its own advanced k-diffusion sampling that ensures seamless, native, and continuous swap in a refiner setup.” Compare this to using two sampler nodes in ComfyUI.
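To make the idea concrete, here is an illustrative sketch of what a single-sampler refiner swap looks like conceptually: one sampling loop hands the latent from the base model to the refiner partway through, instead of chaining two KSampler nodes with separate schedules. This is my own simplification, not Fooocus’s actual code, and `denoise_step` is a hypothetical method name:

```python
# Illustrative sketch only -- not Fooocus's actual implementation.
def sample_with_refiner(base, refiner, latent, steps, swap_at=0.8):
    """Run one continuous sampling loop, swapping from the base model to
    the refiner at a fraction (swap_at) of the total steps."""
    swap_step = int(steps * swap_at)
    model = base
    for step in range(steps):
        if step == swap_step:
            model = refiner  # seamless hand-off: same schedule, same latent
        latent = model.denoise_step(latent, step, steps)
    return latent
```

Because the swap happens inside one loop, the noise schedule is never restarted, which is the “seamless, native, and continuous” part of the claim.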
Install ComfyUI_Fooocus_KSampler
I noticed a post by GerardP19 on Reddit, entitled “My new workflow utilizing the Foocus sampler in Comfy, with the styler and nested nodes.” He did not provide a link to a GitHub repo, instead requiring ComfyUI-Manager for installation.
Alternatively:
- Download ComfyUI_Fooocus_KSampler.
- Unzip and place the folder in `ComfyUI\custom_nodes`.
- Start (or re-start) ComfyUI.
- Create a workflow to test...
At time of writing, there is a bug with the Fooocus KSampler extension which may be fixed by the time you read this! If so, ignore the following:
Error! Cannot import ...\ComfyUI\custom_nodes\ComfyUI_Fooocus_KSampler-main module for custom nodes: cannot import name 'load_additional_models' from 'comfy.sample'
- Edit the file `ComfyUI_Fooocus_KSampler-main\sampler\Fooocus\core.py`.
- Search for `load_additional_models`. Compare with the similar function in ComfyUI, `get_additional_models`.
- Search for all instances of `load_additional_models` to replace with `get_additional_models`:
  - replace in `from comfy.sample import prepare_mask, broadcast_cond, load_additional_models, cleanup_additional_models`
  - replace two instances of `models = load_additional_models(positive, negative, model.model_dtype())` with `models = get_additional_models(positive, negative)`
- Re-start ComfyUI and re-test.
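If you’d rather not edit the file by hand, a throwaway script can apply the same substitutions. `patch_core` is my own hypothetical helper, not part of Fooocus or ComfyUI:

```python
# Hypothetical one-off helper to apply the substitutions described above
# to core.py; it is not part of either project.
from pathlib import Path

def patch_core(path):
    text = Path(path).read_text()
    # Replace the two call sites first, dropping the extra
    # model_dtype() argument that the new function no longer takes.
    text = text.replace(
        "models = load_additional_models(positive, negative, model.model_dtype())",
        "models = get_additional_models(positive, negative)",
    )
    # Then rename any remaining references, including the import line.
    text = text.replace("load_additional_models", "get_additional_models")
    Path(path).write_text(text)
```

Point it at the `core.py` file inside the extension folder, then re-start ComfyUI as above.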
I encountered this error with my specific computer setup, but you may not!
Error! RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate ###### bytes.
I encountered this previously, but worked around it by splitting my workflow into two discrete steps (Base and Refiner); sometimes simply re-running the workflow managed to squeeze everything into memory, since the nodes cache outputs. I wrongly assumed I did not have enough VRAM and said as much in a previous post, but that’s not the root cause!
I now realize it’s because I have too little system RAM, only 16GB! One solution is to buy more... but for now, increasing the Windows virtual memory paging file size allows me to continue:
- From the Start Menu, type `View advanced system settings`.
- In the System Properties panel, navigate to the Advanced tab > Performance Settings...
- In the Performance Options panel, again navigate to the Advanced tab.
- Under Virtual Memory, click Change...
- Disable “Automatically manage paging file size for all drives” and manually specify a Custom size: I set a range from `18000` MB to `30000` MB.
- Click Set to save, then back out of the panels by clicking the OK buttons.
No-Code Workflow
At minimum:
- A CheckpointLoaderSimple node to load SDXL Base.
- A CLIPTextEncodeSDXL and a CLIPTextEncode node for the `positive` and `negative` prompts.
- A CheckpointLoaderSimple node to load SDXL Refiner.
- A CLIPTextEncodeSDXLRefiner and a CLIPTextEncode node for the `refiner_positive` and `refiner_negative` prompts respectively.
- An EmptyLatentImage node specifying an image size consistent with the previous CLIP nodes.
- Wire up everything required to a single KSampler With Refiner (Fooocus) node - this is so much neater!
- And finally, wire the latent output to a VAEDecode node followed by a SaveImage node, as usual.
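For reference, the minimal graph above can also be expressed in ComfyUI’s API (JSON) workflow format. The stock node class names below are real, but `KSamplerWithRefiner` is only my guess at the Fooocus node’s internal class name, the checkpoint filenames are placeholders, and several node parameters are omitted for brevity - check your own install:

```python
# Sketch of the minimal graph in ComfyUI API format: {node_id: {class_type, inputs}}.
# Links are [source_node_id, output_slot] pairs; for CheckpointLoaderSimple the
# output slots are 0=MODEL, 1=CLIP, 2=VAE. Many node parameters are omitted.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",      # SDXL Base
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CheckpointLoaderSimple",      # SDXL Refiner
          "inputs": {"ckpt_name": "sd_xl_refiner_1.0.safetensors"}},
    "3": {"class_type": "CLIPTextEncodeSDXL",          # positive
          "inputs": {"clip": ["1", 1], "text_g": "a photo", "text_l": "a photo"}},
    "4": {"class_type": "CLIPTextEncode",              # negative
          "inputs": {"clip": ["1", 1], "text": "blurry"}},
    "5": {"class_type": "CLIPTextEncodeSDXLRefiner",   # refiner_positive
          "inputs": {"clip": ["2", 1], "text": "a photo"}},
    "6": {"class_type": "CLIPTextEncode",              # refiner_negative
          "inputs": {"clip": ["2", 1], "text": "blurry"}},
    "7": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "8": {"class_type": "KSamplerWithRefiner",         # assumed class name!
          "inputs": {"model": ["1", 0], "refiner_model": ["2", 0],
                     "positive": ["3", 0], "negative": ["4", 0],
                     "refiner_positive": ["5", 0], "refiner_negative": ["6", 0],
                     "latent_image": ["7", 0]}},
    "9": {"class_type": "VAEDecode",
          "inputs": {"samples": ["8", 0], "vae": ["1", 2]}},
    "10": {"class_type": "SaveImage",
           "inputs": {"images": ["9", 0], "filename_prefix": "fooocus"}},
}

def links_resolve(wf):
    """Sanity check: every [node_id, slot] link must point at an existing node."""
    for node in wf.values():
        for value in node["inputs"].values():
            if isinstance(value, list):
                assert value[0] in wf, f"dangling link to node {value[0]}"
    return True
```

Even as a sketch, writing the graph out this way makes the appeal obvious: one sampler node replaces the usual two-sampler base/refiner chain.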
For the curious, prompt credit goes to masslevel, who shared “Some of my SDXL experiments with prompts” on Reddit. And as a teaser... coming up, SDXL ControlNet for OpenPose (v2)!