Upscaling in ComfyUI: tips and workflows from Reddit
I added a switch toggle for the group on the right. Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue.

If it's a close-up, then fix the face first. Try a VAEDecode immediately after the latent upscale to see what I mean.

It depends what you are looking for. If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details). You either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass.

Clearing up blurry images has its practical uses, but most people are looking for something like Magnific, which actually fixes all the smudges and messy details of SD-generated images and at the same time produces very clean and sharp results.

Tried it; it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt), otherwise the picture gets baked instantly. You also cannot go higher than 512 up to 768 resolution (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

Apply "upscale by" 0.5 after the 4x model to get a 1024x1024 final image (512 x 4 x 0.5 = 1024).

The only approach I've seen so far is using the Hires fix node, where its latent input comes from AI upscale > downscale image nodes.

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0. It uses CN tile with Ultimate SD Upscale. I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.
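The two second-pass strategies described above can be written down as data; this is just a minimal sketch of the tradeoff, and the exact denoise values are my reading of the comments in this thread, not fixed rules:

```python
# Two common second-pass upscale strategies, as data.
# Denoise values are illustrative examples, not canonical settings.
STRATEGIES = {
    "pixel_first": {
        # Upscale the decoded image, re-encode, then resample gently.
        "order": ["upscale image (model or bicubic)", "VAEEncode", "KSampler"],
        "denoise": 0.3,   # low: the image already has detail worth keeping
    },
    "latent_first": {
        # Stretch the latent, then resample hard to resolve the added blur.
        "order": ["latent upscale", "KSampler"],
        "denoise": 0.55,  # high: latent stretch introduces noise to diffuse away
    },
}

for name, strategy in STRATEGIES.items():
    print(name, strategy["denoise"])
```

The point of the comparison: the pixel-first route preserves the original composition at the cost of fewer new details, while the latent-first route invents detail but deviates more.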
For now I got this: "A gorgeous woman with long light-blonde hair wearing a low-cut tank top, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by Artgerm and Alphonse Mucha, trending on Behance, very detailed, by the best painters."

Welcome to the unofficial ComfyUI subreddit.

At the moment I start the first sampling at 512x512, upscale with 4x ESRGAN, downscale the image to 1024x1024, and sample it again, like the docs say. Then plug the output into a "latent upscale by" node set to whatever you want your end image to be (lower values like 1.5 are usually a better idea than going 2+ here, because latent upscale introduces noise, which requires an offset denoise value to be added in the following KSampler), followed by a second KSampler at 20+ steps.

Tried the llite custom nodes with lllite models and was impressed.

Hi there, I use Ultimate SD Upscale, but it just does the same process again and again. Below is the console output; hoping to get some help:

Upscaling iteration 1 with scale factor 2
Tile size: 768x768
Tiles amount: 6
Grid: 2x3
Redraw enabled: True
Seams fix mode: NONE
Requested to load AutoencoderKL
Loading 1 new model

Thank you for your help! I switched to the Ultimate SD Upscale (with Upscale), but the results appear less real to me, and it seems like it is making my machine work harder.

Latent quality is better, but the final image deviates significantly from the initial generation.

It's high quality, and it's easy to control the amount of detail added using control scale and restore CFG, but it slows down at higher scales faster than Ultimate SD Upscale does. At 0.3 denoise it takes a bit longer, but gives more consistent results than latent upscale.

I've struggled with Hires.fix and other upscaling methods like the Loopback Scaler script and SD Upscale. I liked the ability in MJ to choose an image from the batch and upscale just that image.

This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc.
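The tile counts in a console log like the one above follow directly from the upscaled image size and the tile size. A small sketch (the helper is mine, not part of Ultimate SD Upscale; the 1536x2048 image size is an assumed example that happens to reproduce the 2x3 grid from the log):

```python
import math

def tile_grid(width: int, height: int, tile: int) -> tuple[int, int]:
    """Columns x rows of tiles needed to cover an image of the given size."""
    return (math.ceil(width / tile), math.ceil(height / tile))

# e.g. a 1536x2048 upscaled image with 768px tiles
cols, rows = tile_grid(1536, 2048, 768)
print(f"Grid: {cols}x{rows}, tiles: {cols * rows}")  # Grid: 2x3, tiles: 6
```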
That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI. No matter what, Upscayl is a speed demon in comparison.

This is done after the refined image is upscaled and encoded into a latent.

Imagine it gets to the point that temporal consistency is solid enough, and generation time fast enough, that you can play and upscale games or footage in real time to this level of fidelity. Ugh.

The resolution is okay, but if possible I would like to get something better. Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

You end up with images anyway after KSampling, so you can use those upscale nodes. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

Basically, if I find the SDXL Turbo preview close enough to what I have in mind, I one-click the group toggle node and use the normal SDXL model to iterate on SDXL Turbo's result, effectively iterating with a second KSampler at a low denoise strength.

But I want your opinion on the upscale. You can download both images from my Google Drive; I cannot upload them here since they are both 500 MB to 700 MB.

Good for depth and OpenPose; so far so good. I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

And when purely upscaling, the best upscaler is called LDSR.

Usually I use two of my workflows: "Latent upscale" and then denoising 0.5, or "Upscaling with model" and then denoising 0.2 and resampling faces 0.9.
Grab the image from your file folder and drag it onto the ComfyUI window; it will replicate the image's workflow and seed.

Because the upscale model of choice can only output a 4x image and they want 2x. The downside is that it takes a very long time.

So instead of one girl in an image you get 10 tiny girls stitched into one giant upscaled image.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.

If it's a distant face, then you probably don't have enough pixel area to do the fix justice. It depends on how large the face in your original composition is.

Jan 13, 2024 · So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors.

Here are details on the workflow I created: this is an img2img method where I use the Blip Model Loader from WAS to set the positive caption.

Solution: click the node that calls the upscale model and pick one.

My workflow runs about like this: [KSampler] > [VAE Decode] > [Resize] > [VAE Encode] > [KSampler #2 thru #n]. I typically use the same or a closely related prompt for the additional KSamplers, same seed and most other settings, with the only differences among my (for example) four KSamplers being in the #2 through #n positions.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.
Please share your tips, tricks, and workflows for using this software to create your AI art.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

It added nothing. "Latent upscale" is an operation in latent space, and I don't know of any way to use the model mentioned above in latent space.

"The training requirements of our approach consist of 24,602 A100-GPU hours, compared to Stable Diffusion 2.1's 200,000 GPU hours." From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion.

Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale. This will allow detail to be built in during the upscale.

To find the downscale factor in the second part, calculate: factor = desired total upscale / the model's fixed upscale.

If I feel I need to add detail, I'll do some image blend stuff and advanced samplers to inject the old face into the process.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

I have been generally pleased with the results I get from simply using additional samplers. If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it.

Upscale using a 4x model (e.g. UltraSharp), then downscale.

I generate an image that I like, then mute the first KSampler, unmute the Ultimate SD Upscaler, and upscale from that.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).
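The downscale-factor formula above, as a tiny sketch (the function name is mine; the 2x-from-a-4x-model numbers are an example):

```python
def downscale_factor(desired_total: float, model_scale: float) -> float:
    """'upscale by' value to apply after a fixed-scale upscale model,
    so the combined result hits the desired total scale."""
    return desired_total / model_scale

# want 2x total from a 4x model -> scale the model's output by 0.5
print(downscale_factor(2.0, 4.0))  # 0.5
```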
Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale. Edit: I also wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels. The reason I haven't raised issues on any of the repos is that I am not sure where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely.

But I probably wouldn't upscale by 4x at all if fidelity is important. I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows.

ComfyUI's "upscale with model" node doesn't have an output size option like other upscale nodes, so one has to manually downscale the image to the appropriate size. You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos. I have a custom image resizer that ensures the input image matches the output dimensions. The final node is where ComfyUI takes those images and turns them into a video.

The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale.

The workflow has a different upscale flow that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image.

- Image upscale is less detailed, but more faithful to the image you upscale.

I've struggled with Hires.fix. Does anyone have any suggestions? Would it be better to do an iterative upscale? I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts.
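The "very small steps" approach above amounts to a size schedule. A minimal sketch (the helper and the 1.25 step per pass are my own illustrative choices, not settings from the thread):

```python
def upscale_schedule(start: int, target: int, step: float = 1.25) -> list[int]:
    """Edge lengths for a gradual upscale in small steps, clamped at target."""
    sizes = [start]
    while sizes[-1] < target:
        sizes.append(min(int(sizes[-1] * step), target))
    return sizes

print(upscale_schedule(512, 1024))  # [512, 640, 800, 1000, 1024]
```

Each entry would be one resize-plus-resample pass at a very low denoise, instead of a single large jump.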
This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description of what should appear in the area defined by coordinates, e.g. starting from x:0px, y:320px to x:768px, y:…

New to ComfyUI, so not an expert.

Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but you can easily add more). Details and bad-hands LoRAs loaded. I use it with DreamShaperXL mostly, and it works like a charm.

The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do x4 and that's often too big to process), then send it back to VAE Encode and sample it again.

Depending on the noise and strength, it ends up treating each square as an individual image.

Jul 23, 2024 · The standard ESRGAN 4x is a good jack of all trades that doesn't come with a crazy performance cost, and if you're low on VRAM, I would expect you're using some sort of tiled upscale solution like Ultimate SD Upscale, yeah?

(Possibly for Automatic1111, but I only use ComfyUI now.) I had seen a tutorial method a while back that would allow you to upscale your image by grid areas, potentially letting you specify the desired grid size on the output of an upscale and how many grids (rows and columns) you wanted.

After borrowing many ideas and learning ComfyUI, I started to use ComfyUI/SD locally a few days ago, and I wanted to know how to get the best upscaling results. Adding LoRAs in my next iteration.

Latent upscale is different from pixel upscale. With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail.
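Those pixel coordinates relate to latent space by the usual SD VAE compression of 8x8 pixel blocks into one latent cell; the helper below is my own illustration of that relationship (the 704px height is an assumed example), not the actual MultiAreaConditioning code:

```python
LATENT_DOWNSCALE = 8  # SD VAEs map 8x8 pixel blocks to one latent cell

def px_area_to_latent(x: int, y: int, w: int, h: int) -> tuple[int, int, int, int]:
    """Convert a pixel-space conditioning area to latent-space units."""
    d = LATENT_DOWNSCALE
    return (x // d, y // d, w // d, h // d)

# the x:0px, y:320px, 768px-wide area above, with an assumed 704px height
print(px_area_to_latent(0, 320, 768, 704))  # (0, 40, 96, 88)
```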
The upscale quality is mediocre, to say the least. If it's below 0.5 for latent upscale you can get issues; I tend to use a 4x UltraSharp image upscale and then re-encode back through a KSampler at the higher resolution with a low denoise.

Here is a workflow that I use currently with Ultimate SD Upscale.

This is the 'latent chooser' node; it works, but it is slightly unreliable.

Sure, it comes up with new details, which is fine, even beneficial for the second pass in a t2i process, since the miniature first pass often has some issues due to imperfections.

My guess is you downloaded a workflow from somewhere, but the person who created that workflow has changed the filename of the upscale model, and that's why your ComfyUI can't find it.

I want to replicate the "upscale" feature inside "extras" in A1111, where you can select a model and the final size of the image.

I created a workflow with Comfy for upscaling images. It's why you need at least 0.5 denoise.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like.

Increasing the mask blur lost details, but increasing the tile padding to 64 helped.

That's because latent upscale turns the base image into noise (blur). If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook.

Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements.
That's a cost of about…

Upscale x1.5 to x2 (no need for a model; it can be a cheap latent upscale). Sample again at denoise 0.6, with either CNet strength 0.9 (end_percent 0.55) or CNet strength 0.5 (euler, sgm_uniform).

I'm trying to find a way of upscaling the SD video up from its 1024x576.

Hello! I am hoping to find a ComfyUI workflow that allows me to use Tiled Diffusion + ControlNet Tile for upscaling images. Can anyone point me toward a Comfy workflow that does a good job of this?

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).

And you may need to do some fiddling to get certain models to work, but copying them over works if you are super duper lazy. Still working on the whole thing, but I got the idea down.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

In A1111, I employed a resolution of 1280x1920 (with HiRes fix), generating 10-20 images per prompt. Subsequently, I'd cherry-pick the best one and employ the Ultimate SD Upscale for a 2x upscale.

Look at this workflow: the issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111. This is not the case.

An alternative method: make sure you are using the KSampler (Efficient) version, or another sampler node that has the 'sampler state' setting, for the first-pass (low-resolution) sample.
- Latent upscale looks much more detailed, but gets rid of the detail of the original image.

Now, transitioning to Comfy, my workflow continues at the 1280x1920 resolution.

Hello, fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution.

The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense! And latent upscaling takes longer for sure; no wonder my workflow was so fast.

I've so far achieved this with the Ultimate SD image upscale, using the 4x-Ultramix_restore upscale model. Also, both have a denoise value that drastically changes the result.

Then I upscale with 2x ESRGAN and sample the 2048x2048 again, and upscale again with 4x ESRGAN.

So if you want 2x from a 4x model, the downscale factor is 2 / 4 = 0.5.

It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body. Thanks.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

Both of these are of similar speed.

I find if it's below 0.5, you don't need that many steps. From there you can use a 4x upscale model and run the sample again at low denoise if you want higher resolution.

x4-upscaler-ema.safetensors (SD 4X Upscale Model)

Jan 5, 2024 · Click on Install Models in the ComfyUI Manager menu, search for "upscale", and click Install for the models you want.
The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts together.

Hi! I was wondering if someone could help me upscale a photo to 8K, or super high resolution? My fiancé was killed by a drunk driver; the photo is a mugshot of the individual that I want to use on a big sign, but it needs to be upscaled so it doesn't lose quality.

A step-by-step guide to mastering image quality.

Point the install path in the Automatic1111 settings to the ComfyUI folder inside your ComfyUI install folder, which is probably something like comfyui_portable\comfyUI or something like that.

Hires.fix and Loopback Scaler either don't produce the desired output, meaning they change too much about the image (especially faces), or they don't increase the details enough, which causes the end result to look too smooth (sometimes losing details) or even blurry and smeary.

Thanks for all your comments. Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

Hello, for more consistent faces I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image.

Images are too blurry and lack details; it's like upscaling any regular image with some traditional methods.

Upscale and then fix will work better here. That said, Upscayl is SIGNIFICANTLY faster for me.
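The split-then-stitch step described above boils down to computing tile rectangles over the upscaled image. A minimal sketch (my own helper, not the actual node's code; the sizes in the example are arbitrary):

```python
def tile_boxes(width: int, height: int, tile: int, overlap: int = 0) -> list[tuple[int, int, int, int]]:
    """(left, top, right, bottom) boxes covering the image.
    A nonzero overlap makes neighboring tiles share pixels, which helps hide seams."""
    step = tile - overlap
    boxes = []
    for top in range(0, height, step):
        for left in range(0, width, step):
            boxes.append((left, top, min(left + tile, width), min(top + tile, height)))
    return boxes

# a 1024x1024 image split into 512px tiles -> 4 parts to upscale and stitch
print(len(tile_boxes(1024, 1024, 512)))  # 4
```

Each box would be cropped out, upscaled and resampled on its own, then pasted back; the overlap region is where seam blending happens.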
Hello, A1111 user here, trying to make the transition to ComfyUI, or at least to learn ways to use both.