Convert animated WebP to WebM movie with Python
In my last post I mentioned converting an animated WebP image format into a WebM movie format. This post expands on how I did it.
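A minimal sketch of one way to do this (not necessarily the exact code from the post): Pillow can step through the frames of an animated WebP and dump them as PNGs, and ffmpeg can then encode the numbered PNGs into a VP9 WebM. The paths, frame rate, and filenames below are illustrative.

```python
import subprocess
from pathlib import Path

from PIL import Image, ImageSequence  # Pillow

def dump_frames(src: str, outdir: str) -> int:
    """Dump each frame of an animated image (WebP, GIF, ...) as a PNG."""
    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    with Image.open(src) as im:
        for frame in ImageSequence.Iterator(im):
            frame.convert("RGBA").save(out / f"frame_{count:04d}.png")
            count += 1
    return count

def ffmpeg_webm_cmd(pattern: str, fps: int, dest: str) -> list[str]:
    """Build an ffmpeg command that encodes numbered PNGs into a VP9 WebM."""
    return ["ffmpeg", "-y", "-framerate", str(fps), "-i", pattern,
            "-c:v", "libvpx-vp9", "-pix_fmt", "yuva420p", dest]

# Usage (requires ffmpeg on PATH):
#   dump_frames("animation.webp", "frames")
#   subprocess.run(ffmpeg_webm_cmd("frames/frame_%04d.png", 10, "out.webm"), check=True)
```

Going via intermediate PNGs sidesteps ffmpeg's limited support for decoding animated WebP directly, which is the part Pillow handles here.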
Happy new year!
Ever needed to extract images from PDFs and found both on-line and off-line tools lacking? Well, I certainly have, and here I present my Python code to extract JPGs/PNGs from PDFs, using PyMuPDF.
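The core of the approach looks roughly like this (a simplified sketch, not the post's full script): PyMuPDF lists each page's embedded images, and `Document.extract_image` returns the raw image bytes together with their format. The filename scheme is my own invention.

```python
from pathlib import Path

def image_name(stem: str, page_no: int, index: int, ext: str) -> str:
    """Build an output filename like 'report-p003-01.png'."""
    return f"{stem}-p{page_no:03d}-{index:02d}.{ext}"

def extract_images(pdf_path: str, outdir: str) -> int:
    """Save every embedded image in a PDF; returns the number written."""
    import fitz  # PyMuPDF

    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    written = 0
    doc = fitz.open(pdf_path)
    for page in doc:
        # full=True also lists images referenced indirectly
        for i, info in enumerate(page.get_images(full=True)):
            xref = info[0]
            img = doc.extract_image(xref)  # {"image": bytes, "ext": "png"/"jpeg", ...}
            name = image_name(Path(pdf_path).stem, page.number + 1, i, img["ext"])
            (out / name).write_bytes(img["image"])
            written += 1
    return written
```

Because `extract_image` hands back the original embedded bytes, JPEGs come out as JPEGs and PNGs as PNGs, with no re-encoding loss.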
I sometimes test randomly downloaded code on macOS in a Sandbox that has limited file and network access. I posted about this way back in Jan 2017, Creating a macOS Sandbox to run Kodi, and this is a short refresher for me...
I recently wanted to create an animated PNG, but macOS does not include any built-in tools to combine multiple PNGs into an APNG file. Here’s an option using Python source code.
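One way to do this (a sketch, not necessarily the post's exact code) is with Pillow, which has supported writing APNG since version 7.1; the filenames and timings here are illustrative.

```python
from PIL import Image

def pngs_to_apng(png_paths, dest, duration_ms=100):
    """Combine still PNGs into an animated PNG using Pillow's APNG writer."""
    frames = [Image.open(p) for p in png_paths]
    frames[0].save(
        dest,
        save_all=True,          # write all frames, not just the first
        append_images=frames[1:],
        duration=duration_ms,   # per-frame delay in milliseconds
        loop=0,                 # 0 = loop forever
    )

# Usage:
#   pngs_to_apng(["frame1.png", "frame2.png", "frame3.png"], "out.png")
```

The destination keeps a `.png` extension; APNG is a backwards-compatible extension of PNG, so viewers without animation support simply show the first frame.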
When I export photos or videos from Photos.app, I want the file’s creation date (the Created Date shown in Finder) to be the time the photo or video was taken. Alas, this is not the way Photos works, and setting the date turned out to be more challenging than expected.
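Part of the challenge is that Python's `os.utime` only sets access and modification times; the macOS creation date needs something extra. One common route (not necessarily the one the post settles on) is `SetFile -d` from the Xcode command line tools, which expects dates as `MM/DD/YYYY HH:MM:SS`:

```python
import os
import subprocess
from datetime import datetime

def setfile_date(dt: datetime) -> str:
    """Format a datetime the way SetFile -d expects: MM/DD/YYYY HH:MM:SS."""
    return dt.strftime("%m/%d/%Y %H:%M:%S")

def set_created_date(path: str, dt: datetime) -> None:
    # SetFile ships with the Xcode command line tools (deprecated, but still works)
    subprocess.run(["SetFile", "-d", setfile_date(dt), path], check=True)
    # keep the modification time consistent with the new creation date
    ts = dt.timestamp()
    os.utime(path, (ts, ts))
```

`SetFile` is macOS-only and officially deprecated, so this is a pragmatic workaround rather than a portable solution.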
Epic Diffusion recently came to my attention, a high-quality merge of various models by John Slegers: “Epîc Diffusion is a general purpose model based on Stable Diffusion 1.x intended to replace the official SD releases as your default model. It is focused on providing high quality output in a wide range of different styles...” Figured I’d give it a spin.
Recently (around 14 December 2022), Apple’s Machine Learning Research team published “Stable Diffusion with Core ML on Apple Silicon” with Python and Swift source code optimized for Apple Silicon (M1/M2) on GitHub apple/ml-stable-diffusion. Here I’m trying it out on a MacBook (though the code also works on iPhones and iPads)...
I refactored my previous Stable Diffusion code, to clean up, OO it a little, and add new features like tiling, upscaling, PNG metadata. As I mentioned before, I don’t understand AI/ML... but I do understand programming! So here is my new, more elegant Simple-SD v1.0 Python script.
I have more ideas for Stable Diffusion. My nights and weekends are consumed! This time: For inpainting, why create a mask image manually, when A.I. can automatically build a mask from a text prompt? Someone much smarter has already published a paper (arXiv:2112.10003 [cs.CV]), with source code, to do just this!
More Stable Diffusion! This time attempting to add inpainting / masking based on my previous code, merging both txt2img.py and img2img.py capabilities, disregarding the out-of-box inpainting.py code, which does not have parameters for positive or negative prompts. Keyword being attempting...
I’ve been playing around with the Stable Diffusion scripts a little (to be exact, Ben Firshman’s version). To help me understand the script, I decided to re-write it the way I prefer to use it... either breaking or optimizing it in the process :P
Following from my previous post, AI-generated images with Stable Diffusion on an M1 mac: This time, using the image-to-image script, which takes an input “seed” image, in addition to the text prompt as inputs. In this case the model will use the shapes and colors in the input image as a base for the output AI-generated image.
There has been a lot of buzz about Stable Diffusion for text-to-image synthesis, which saw its public release around 22 Aug 22. You can read more on the Stability.AI blog and try it at Hugging Face. What’s groundbreaking is that it is open source, with a pre-trained downloadable model and modest system requirements, so anyone can try it on their own computer... anyone... like me!
Almost a year ago, Tencent researchers released their GFPGAN Face Restoration, an AI model which is trained specifically on faces, to better upscale and restore details in low-resolution or damaged portrait photos. I thought I’d give it a whirl.
I recently wanted to migrate from a Bash 3 shell script to Python 3. This is nothing but a brain dump comparing a few bits and bobs.
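A few illustrative Bash-to-Python pairs of the kind such a comparison covers (my own examples, not necessarily the post's list):

```python
import os
import subprocess
import sys
from pathlib import Path

# Bash: out=$(cmd)  ->  capture a command's stdout
out = subprocess.run(
    [sys.executable, "-c", "print('hello')"],  # stand-in for any command
    capture_output=True, text=True,
).stdout.strip()

# Bash: for f in *.txt  ->  glob over files
txt_files = sorted(Path(".").glob("*.txt"))

# Bash: ${MY_VAR:-default}  ->  environment variable with a fallback
value = os.environ.get("MY_VAR", "default")
```

The `subprocess`, `pathlib`, and `os` modules cover most of what a Bash 3 script does, usually with better error handling along the way.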