ComfyUI workflow PNG (Reddit)

It took me hours to get an inpainting workflow I'm more or less happy with: I feather the mask (feather nodes usually don't work how I want, so I use mask2image, blur the image, then image2mask) and use 'only masked area' so it also applies to the ControlNet (applying it to the ControlNet was probably the worst part). I'm not sure which specifics you're asking about, but I use ComfyUI for the GUI and a custom workflow combining ControlNet inputs and multiple hires-fix steps.

To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI.

I tend to agree with NexusStar: as opposed to having some uber-workflow thingie, it's easy enough to load specialised workflows just by dropping a workflow-embedded .png into ComfyUI.

Just the workflow, including the wildcard prompt, is saved, but not what the random prompt generated. Again I got the difference between the images and increased the contrast.

I dump the metadata for a png I really like: magick identify -verbose .\ComfyUI_01556_.png. It'll create the workflow for you. If I drag and drop the image, it is supposed to load the workflow? I also extracted the workflow from its metadata and tried to load it, but it doesn't load. A quick question for people with more experience with ComfyUI than me.

Flux Schnell is a distilled 4-step model.

SDXL 1.0 | all workflows use base + refiner. Not sure if my approach is correct or sound, but if you go to my other post (the one on just getting started) and download the png and throw it into ComfyUI, you'll see the node setup I sort of cobbled together. I had to place the image into a zip, because people have told me that Reddit strips .pngs of metadata.
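For reference, ComfyUI stores that metadata in the PNG's text chunks, which is the same data `magick identify -verbose` prints. As a rough sketch using only the Python standard library, you can pull the workflow out yourself; this assumes the workflow sits in an uncompressed tEXt chunk under the keyword `workflow` (what current ComfyUI builds write) and ignores the compressed zTXt/iTXt variants:

```python
import json
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def extract_text_chunks(png_bytes):
    """Collect tEXt key/value pairs from a PNG byte string.

    Each PNG chunk is: length (4 bytes, big-endian), type (4 bytes),
    data, CRC (4 bytes). ComfyUI stores the graph under 'workflow'
    and the executable prompt under 'prompt'.
    """
    if not png_bytes.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    pos = len(PNG_SIG)
    texts = {}
    while pos + 8 <= len(png_bytes):
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt payload is keyword, a NUL separator, then latin-1 text.
            key, _, value = data.partition(b"\x00")
            texts[key.decode("latin-1")] = value.decode("latin-1")
        pos += 8 + length + 4  # header + data + CRC
        if ctype == b"IEND":
            break
    return texts
```

Feed it the raw bytes of a ComfyUI output and json.loads(texts["workflow"]) gives you the graph. Real files saved through other tools may use compressed or international text chunks instead; Pillow's Image.open(...).info handles those for you.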
I'm revising the workflow below to include a non-latent option.

Searge SDXL Update v2.1 for ComfyUI | now with LoRA, HiresFix, and better image quality | workflows for txt2img, img2img, and inpainting with SDXL 1.0 and refiner.

Welcome to the unofficial ComfyUI subreddit.

Getting an issue where whatever I generate, a bogus workflow I used a few days ago is saved, and when I try to load the png it brings up the wrong workflow and fails to render anything if I hit queue. The workflow JSON info is saved with the .png.

I spent around 15 hours playing around with Fooocus and ComfyUI, and I can't even say that I've started to understand the basics. I'm trying to do the same as hires fix, with a model and weight below 0. I was confused by the fact that I saw in several YouTube videos by Sebastian Kamph and Olivio Sarikas that they simply drop PNGs into the empty ComfyUI. But let me know if you need help replicating some of the concepts in my process.

I would like to use that in tandem with an existing workflow I have that uses QR Code Monster to animate traversal of the portal. My only current issue is as follows. Layer copy & paste this PNG on top of the original in your go-to image editing software.

Here I just use: futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, built by Tesla, Tesla factory in the background. I'm not using breathtaking, professional, award winning, etc., because it's already handled by "sai-enhance".

I would like to further modify the ComfyUI workflow for the aforementioned "Portal" scene, in a way that lets me use single images in ControlNet the same way that repo does (by frame-labeled filename etc.).
You can save the workflow as a json file with the queue control panel "Save" workflow button.

ComfyUI is a completely different conceptual approach to generative art.

Anyone ever deal with this? This missing metadata can include important workflow information, particularly when using Stable Diffusion or ComfyUI.

Hi all! Was wondering, is there any way to load an image into ComfyUI and read the generation data from it? I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data like prompts, steps, sampler, etc.

Then I take another picture with a subject (like your problem), removing the background and making it IPAdapter-compatible (square), then prompting and IPAdapting it into a new one with the background.

The problem I'm having is that Reddit strips this information out of the png files when I try to upload them. Simply load / drag the png into ComfyUI and it will load the workflow.

The one I've been mucking around with includes poses (from OpenPose) now, and I'm going to off-screen all nodes that I don't actually change parameters on. More to come.

But when I'm doing it from a work PC or a tablet, it is an inconvenience to obtain my previous workflow. I noticed that ComfyUI is only able to load workflows saved with the "Save" button and not with the "Save API Format" button.
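Outside of ComfyUI, that generation data is straightforward to read: the embedded 'prompt' metadata is plain JSON mapping node ids to objects with a class_type and inputs. A small sketch that pulls sampler settings out of it; this assumes the stock KSampler / KSamplerAdvanced class names, so custom sampler nodes would need to be added to the list:

```python
import json

def sampler_settings(prompt_json):
    """Scan a ComfyUI 'prompt' graph (node_id -> node dict) and return
    the key generation parameters of every sampler node found.

    Assumes the stock sampler class names; custom sampler nodes may use
    different class_type values and input names.
    """
    found = []
    for node_id, node in json.loads(prompt_json).items():
        if node.get("class_type") in ("KSampler", "KSamplerAdvanced"):
            inputs = node.get("inputs", {})
            found.append({
                "node": node_id,
                "seed": inputs.get("seed"),
                "steps": inputs.get("steps"),
                "cfg": inputs.get("cfg"),
                "sampler_name": inputs.get("sampler_name"),
            })
    return found
```

Values that are wired from other nodes appear as [node_id, output_index] lists rather than literals, so a full reader would have to follow those links.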
From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.

This should import the complete workflow you have used, even including unused nodes.

If you need help, just let me know. If you really want the json, you can save it after loading the png into ComfyUI.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution.

First of all, sorry if this has been covered before; I did search and nothing came back. I tried to find either of those two examples, but I have so many damn images I couldn't find them. If necessary, updates of the workflow will be made available on GitHub.

This topic aims to answer what I believe would be the first questions an A1111 user might have about Comfy.

It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server. Potential use cases include: streamlining the process for creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values.

If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully find that the problem resolved itself.

Loading a PNG to see its workflow is a lifesaver to start understanding the workflow GUI, but it's not nearly enough. Dragging a generated png on the webpage or loading one will give you the full workflow, including the seeds that were used to create it.

I use a Google Colab VM to run ComfyUI. It is not much of an inconvenience when I'm at my main PC.
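For the same kind of programmatic use, a running ComfyUI server can also be driven directly: the official script examples POST an API-format workflow to the /prompt endpoint. A minimal sketch, assuming the default address 127.0.0.1:8188; building the request is separated from sending it so the shape can be shown without a live server:

```python
import json
import urllib.request

def build_prompt_request(workflow, server="127.0.0.1:8188"):
    """Build the HTTP request that queues an API-format workflow on a
    running ComfyUI server via its stock /prompt endpoint.

    `workflow` is the flat {node_id: {"class_type": ..., "inputs": ...}}
    dict produced by "Save (API Format)".
    """
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"http://{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# With a server running, queueing is one call away:
# result = json.loads(urllib.request.urlopen(build_prompt_request(wf)).read())
```

The server replies with a prompt_id you can poll for results; this sketch stops at queueing.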
I have also experienced ComfyUI losing individual cable connections for no comprehensible reason, or nodes not working until they were replaced by the same node with the same wiring.

So dragging an image made with Comfy onto the UI loads the entire workflow used to make it, which is awesome, but is there a way to make it load just the prompt info and keep my workflow otherwise?

You can use () to change the emphasis of a word or phrase, like: (good code:1.2) or (bad code:0.8).

Once the final image is produced, I begin working with it in A1111, refining, photobashing in some features I wanted, and re-rendering with a second model, etc.

Please share your tips, tricks, and workflows for using this software to create your AI art. However, this can be clarified by reloading the workflow or by asking questions.

My workflow where you can choose an image (or several) from the batch and upscale them. Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

The metadata from PNG files saved from ComfyUI should transfer over to other ComfyUI environments. I'm currently running into certain prompts where latent just looks awful. You could port forward the ComfyUI port the same way.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. Upscaling SD 1.5 from 512x512 to 2048x2048.

It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units which are represented as nodes.
If you mean workflows: they are embedded into the png files you generate. Simply drag a png from your output folder onto the ComfyUI surface to restore the workflow.

Workflow gallery:
- How to upscale your images with ComfyUI
- Merge 2 images together with this ComfyUI workflow
- ControlNet Depth ComfyUI workflow: use ControlNet Depth to enhance your SDXL images
- Animation workflow: a great starting point for using AnimateDiff
- ControlNet workflow: a great starting point

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

Thanks, I already have that, but I ran into the same issue I had earlier where the Load Image node is missing the Upload button. I fixed it earlier by doing Update All in Manager and then running the ComfyUI and Python dependencies batch files, but that hasn't worked this time, so I'm only going to be able to do prompts from text until I've figured it out.

Not a specialist, just a knowledgeable beginner. Just as an experiment, drag and drop one of the png files you have outputted into ComfyUI and see what happens.

Instead, I created a simplified 2048x2048 workflow. So every time I reconnect I have to load a presaved workflow to continue where I started.

Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now anything that uses the ComfyUI API doesn't have that, though).

My actual workflow file is a little messed up at the moment. I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs.

This works on all images generated by ComfyUI, unless the image was converted to a different format like jpg or webp.
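The "Save" vs "Save API Format" confusion mentioned earlier can be resolved by looking at the JSON itself. A hedged sketch that classifies an export; it relies on the observed file shapes (UI saves carry top-level "nodes" and "links" arrays, API saves are a flat id-to-node map with "class_type"), which aren't a formally specified format, so treat these markers as assumptions:

```python
import json

def workflow_kind(text):
    """Classify a ComfyUI JSON export as 'ui', 'api', or 'unknown'.

    'ui' files are what the Save button and the embedded 'workflow'
    metadata produce; 'api' files come from Save (API Format) and are
    what the /prompt endpoint expects.
    """
    data = json.loads(text)
    if not isinstance(data, dict):
        return "unknown"
    if "nodes" in data and "links" in data:
        return "ui"
    if data and all(
        isinstance(v, dict) and "class_type" in v for v in data.values()
    ):
        return "api"
    return "unknown"
```

A loader could use this to give a clear error ("this is an API-format file") instead of silently failing to load.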
I can load the default and just render that jar again, but it still saves the wrong workflow.

Second, if you're using ComfyUI, the SDXL invisible watermark is not applied. The complete workflow you have used to create an image is also saved in the file's metadata.
You can then load or drag the following image in ComfyUI to get the workflow. I'll do you one better, and send you a png you can directly load into Comfy. This makes it potentially very convenient to share workflows with others. You can simply open that image in ComfyUI, or drag and drop it onto your workflow canvas.

This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it. However, I may be starting to grasp the interface.

To access your computer, you can use Windows Remote Desktop and forward the TCP port using https://remote.it, which can port forward up to five ports on the free plan. Actually there is a better way to access your computer and ComfyUI.

The test image was a crystal in a glass jar. If you see a few red boxes, be sure to read the Questions section on the page.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. Save the new image.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

Oh crap. The solution: to tackle this issue, with ChatGPT's help, I developed a Python-based solution that injects the metadata into the Photoshop file (PNG).

It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body. Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-)

The image itself was supposed to be the workflow png, but I heard Reddit is stripping the metadata from it. (vid2vid made with a ComfyUI AnimateDiff workflow.)

I generated images from ComfyUI. The png files produced by ComfyUI contain all the workflow info. Share, discover, & run thousands of ComfyUI workflows.

SDXL 1.0 ComfyUI Tutorial - Readme file updated with SDXL 1.0 download links and new workflow PNG files - the new updated free-tier Google Colab now auto downloads SDXL 1.0 and refiner and installs ComfyUI.

I've mostly played around with photorealistic stuff and can make some pretty faces, but whenever I try to put a pretty face on a body in a pose or a situation…

I put together a workflow doing something similar, but taking a background and removing the subject, inpainting the area so I got no subject, and spitting it out in some shape or form.

You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.

When I save my final PNG image out of ComfyUI, it automatically includes my ComfyUI data/prompts, etc., so that any image made from it, when dragged back into Comfy, sets ComfyUI back up with all the prompts and data, just like the moment I originally created the original image.

Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except for the ComfyUI Manager and then test a vanilla default workflow.
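The metadata-injection script mentioned above isn't included in the post, but the general approach can be sketched with the standard library: splice a tEXt chunk back in right after the IHDR chunk. The keyword 'workflow' is what ComfyUI itself uses; the rest is a bare-bones assumption (uncompressed tEXt, latin-1 text), and Pillow's PngInfo.add_text does the same job at a higher level:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def add_text_chunk(png_bytes, keyword, text):
    """Re-insert a tEXt chunk (e.g. the ComfyUI 'workflow' JSON) into a
    PNG that lost its metadata, placing it right after the IHDR chunk."""
    if not png_bytes.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    payload = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    chunk = (struct.pack(">I", len(payload)) + b"tEXt" + payload
             + struct.pack(">I", zlib.crc32(b"tEXt" + payload)))
    # IHDR is always first: signature (8) + chunk header (8) +
    # 13 bytes of data + CRC (4) puts the insertion point at offset 33.
    ihdr_end = len(PNG_SIG) + 8 + 13 + 4
    return png_bytes[:ihdr_end] + chunk + png_bytes[ihdr_end:]
```

Run over an image whose workflow JSON you saved separately, this restores drag-and-drop loading in ComfyUI, since the loader only cares about the text chunk being present.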
Just started with ComfyUI and really love the drag-and-drop workflow feature.

An example of the images you can generate with this workflow:

Hi everyone, I've been using SD / ComfyUI for a few weeks now, and I find myself overwhelmed by the number of ways to do upscaling.

A transparent PNG in the original size, with only the newly inpainted part, will be generated.

Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling. EDIT: WALKING BACK MY CLAIM THAT I DON'T NEED NON-LATENT UPSCALES.

Save one of the images and drag and drop it onto the ComfyUI interface.