Replies: 14 comments 9 replies
-
Comparison screenshots: 3bQ8, 7bQ8, 7bQ5. Question: How come the smaller model with fewer tensors produces higher-quality images than the others?
-
Hi @naxci1, are these results still true when using the non-GGUF version?
-
I think the best ones are the 3bfp16 and 3bQ8 models. The 7B models, although larger, never gave me good results.
-
test.mp4 (attached). Using seedvr2_ema_7b_sharp_fp16.safetensors; 640x360 base video resolution, batch of 125 frames, 4x resolution multiplier.
-
Hi @Fenixdg3duy, about the NVIDIA RTX PRO 6000 Blackwell 96GB: are you using this in the cloud? Which site are you using it on, and is there a ready-made template?
-
Same for me. The 3B models are crisp and sharp, while the 7B models are blurry. And the 7B_sharp models are anything but sharp.
-
I think of it in terms of CFG. The higher the CFG, the more the output will look like the input, while a lower CFG gives the model room to be more creative. A lower CFG can be useful if an input is really low quality or needs restoration, whereas a high-quality input with fine details you want to preserve calls for a higher CFG. I would say the 3B model effectively has the lowest CFG, the 7b_sharp model sits in the middle, and the 7B model has the highest CFG.
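For anyone who hasn't seen the formula behind the analogy, this is how classifier-free guidance is usually blended at each denoising step. A minimal sketch in generic diffusion terms (illustrative tensor names, not SeedVR2's actual internals):

```python
import torch

def cfg_combine(pred_cond: torch.Tensor,
                pred_uncond: torch.Tensor,
                cfg_scale: float) -> torch.Tensor:
    """Standard classifier-free guidance blend.

    cfg_scale = 1.0 returns the purely conditional prediction;
    larger values push the result harder toward the conditioning
    (here, the input video), smaller values leave the model more
    freedom to invent detail.
    """
    return pred_uncond + cfg_scale * (pred_cond - pred_uncond)
```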
-
@Junchien999 I'd like you to share your configuration, as I can't achieve the level of detail I see in the image, no matter what I try. The accessories and hair look great. Can you share your .json file? I'm dying of curiosity.
-
@naxci1, have you changed your workflow in the meantime? Do you still mostly use the 3bQ8 GGUF model? I noticed your repo fork of FlashVSR; it works, but unfortunately the results aren't as good as with SeedVR2. Can you tell me the best settings for the torch.compile node? I have all the necessary libraries. I'm wondering: does it only compile once, and does the node always have to stay in the workflow for the model to remain optimized when loading? Also, does torch.compile work for GGUF or only for PyTorch models? Thanks
-
Hi @zelenooki87 I'm currently using this workflow; it makes both automatic and manual input easy. I always use the 3bQ8 model, as I get the best quality with it. My videos are in extremely poor condition, and the other models don't give good results on them.

FlashVSR isn't very high quality, so I improved it into a new version, both for the experience and to add new VAE models. For videos in poor condition, SeedVR2 is undeniably the world's number one, but if the videos are in medium or good condition, FlashVSR is an option; it's actually faster and gives better results than Topaz. FlashVSR's training isn't good, which is why it doesn't give quality output, while SeedVR2's training is really good, even better than Topaz SLM and SLS. The only problem is that SeedVR2's VAE is slow, and the model's latent "DNA" is so specific that you can't swap in other VAEs. I spent weeks trying to add many fast VAEs, but because they didn't match, they either didn't work or gave poor-quality output. With FlashVSR, on the other hand, I added 5 different VAE models in one day, and they all worked very well.

We're expecting the SeedVR3 model this year; the developer promised to release the new version this year, and we're waiting for it. It would be great if VAE model selection were also included. The DiT model processes quickly; the only slow part is the VAE decode stage.

I tried very hard with torch.compile and SA3. They worked, but I gave up because the effect wasn't good: they run well on Linux, but I couldn't get them to run quickly on Windows. torch.compile works with all of them, but you need Linux; if you have Linux, there are tutorial videos on YouTube for installation.
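On the torch.compile question above: in plain PyTorch, compilation is lazy and lives in the running process. A minimal sketch with a stand-in module (a generic nn.Module, not the actual SeedVR2 DiT or the node's wrapper):

```python
import torch
import torch.nn as nn

# Stand-in model; the real SeedVR2 DiT is far larger.
model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))

# torch.compile only wraps the module; nothing is compiled yet.
compiled = torch.compile(model)

x = torch.randn(8, 64)

# The first call triggers graph capture and kernel compilation
# (slow, happens once per process for these shapes/dtypes).
y = compiled(x)

# Subsequent calls with the same shapes/dtypes reuse the compiled graph.
y = compiled(x)
```

Because the compiled graph is per-process, a fresh ComfyUI launch recompiles unless PyTorch's on-disk Inductor cache (TORCHINDUCTOR_CACHE_DIR) can reuse kernels. How a GGUF-loaded model interacts with this depends on the loader node, so treat that part as untested.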
-
Hi @skv89 It's better to disable … I'm not disabling … If we store the models in VRAM, i.e., select …
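In plain PyTorch terms, the trade-off between keeping a model resident in VRAM and offloading it between runs looks roughly like this (a minimal sketch; the actual option names in the SeedVR2 loader node may differ):

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)  # stand-in for a loaded SeedVR2 model

# Keep in VRAM: load once and leave the weights on the GPU between runs,
# so the next run starts immediately but the memory stays occupied.
model = model.to("cuda")

# Offload between runs: move the weights back to system RAM and free the
# cached GPU allocations, at the cost of re-uploading them next time.
model = model.to("cpu")
torch.cuda.empty_cache()
```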
-
Hi @skv89 The link below contains all the ready-made Windows AI wheels; just download and install the correct one for your Python, PyTorch, and CUDA versions. You can get help from an AI assistant; installation is very simple. Currently, SA2 is the fastest option on Windows. I installed SA3, but it doesn't support consumer GPUs and requires an H100 to work. I've actually been working on torch.compile for two days now; it works, but compiling the C++ kernels is extremely slow on Windows, while Linux does it very quickly. I'm still testing it and have spent two days on these tests, but I still can't get the desired results in the Windows environment. So don't use it on Windows for now; use SA2.
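To pick the right wheel, you can print the three versions the wheel filename has to match (standard Python/PyTorch attributes, nothing specific to these nodes):

```python
import sys
import torch

# The wheel you download must match all three of these.
print("Python :", sys.version.split()[0])   # e.g. 3.11.x  -> cp311
print("PyTorch:", torch.__version__)        # e.g. 2.4.1+cu124
print("CUDA   :", torch.version.cuda)       # e.g. 12.4
```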
-
3bQ8 - 7bQ8 - 7bQ5