Replies: 4 comments 2 replies
Looks about right to me. What I do instead is use the FastWan finetune. It's not only faster (distilled) but also looks much better: https://huggingface.co/Green-Sky/FastWan2.2-TI2V-5B-FullAttn-GGUF. Also feel free to try the tiny VAE decoder to save VRAM (#937).
Not used by 5B. Also try
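As a concrete starting point, here is a sketch of what a .cmd invocation with the FastWan GGUF plus the tiny VAE decoder from #937 might look like. The model and VAE filenames, the resolution, and the prompt are assumptions on my part, and flag support varies between builds, so check `sd --help` for the exact options your binary accepts (you will also need the matching umt5 text encoder for Wan):

```bat
rem Sketch only -- filenames and some flags are assumptions; adjust paths
rem and verify against `sd --help` for your build of stable-diffusion.cpp.
sd.exe --diffusion-model FastWan2.2-TI2V-5B-FullAttn-Q8_0.gguf ^
  --vae taew2_2.safetensors ^
  -p "a cat walking through tall grass" ^
  --steps 3 --cfg-scale 1.0 --sampling-method euler ^
  -W 832 -H 480 -o output
```

Distilled models like FastWan are normally run with very few steps and guidance effectively disabled (cfg around 1.0), which is where most of the speedup comes from.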
Or try
Here is what I can manage on 8 GB of VRAM in 30 seconds:
fastwan2.2-t2i5b_tae_3steps_euler+lcm.mp4
Thank you, both of you. It seems to be working now.
I plan to move on to img2img generation, which is why I use a dual-model setup. I wonder whether there is a Wan 2.2 variant with even better quality that still mostly fits into my 12 GB of VRAM (RTX 4070). I wouldn't mind double the encode time, just not something slow as a snail. If such a model exists, I would be grateful for an exact model combination that doesn't fail. System RAM is 64 GB. I am a beginner in this local AI stuff, but I chose stable-diffusion.cpp because it is more compact and lightweight, and .cmd scripts suit me personally. I don't like those "fat" Python programs.
Is it my mistake, or is something else wrong? I even cranked the steps up and added extra options, but the output video is still garbage. I have tested text-to-video mode; I have not tested img-to-video mode yet.
test output.zip
WAN22 cmd.zip