Add Huggingface manual + Macbook GPU/CPU Compatibility #52
jannismoore wants to merge 1 commit into SesameAILabs:main from
Conversation
I think the repo currently does not work with macOS (even with the changes mentioned here). Even after installing bitsandbytes and torch for Apple Silicon, it seems to expect triton (which I don't believe is available on macOS). Additionally, there is an issue in silentcipher/server.py (used in encode_wav): it tries to convert an MPS tensor to float64, but the MPS framework doesn't support float64, so float32 should be used instead. I think it is not as easy as swapping out the device type for mps.
  model_path = hf_hub_download(repo_id="sesame/csm-1b", filename="ckpt.pt")
- generator = load_csm_1b(model_path, "cuda")
+ generator = load_csm_1b(model_path, "cuda")  # Use "mps" for Apple Silicon or "cpu" for Intel MacBooks
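Rather than hard-coding the device string, callers could pick it from what the machine supports. A minimal sketch; the `pick_device` helper is illustrative and not part of the PR, and the commented-out wiring assumes torch's standard availability checks:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the best available torch device string."""
    if cuda_available:
        return "cuda"  # NVIDIA GPU
    if mps_available:
        return "mps"   # Apple Silicon GPU
    return "cpu"       # Intel MacBooks and everything else

# Hypothetical wiring (requires torch installed):
# import torch
# device = pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())
# generator = load_csm_1b(model_path, device)
```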
I don't think this will work
Hmm, actually I might be wrong. Seems like it should work if we set NO_TORCH_COMPILE=True to disable triton.
Thanks, Kyle, you're right. I've been running into the same situation.
Triton can be disabled with this env var. Mimi compiles lazily at runtime, which requires triton. This didn't provide any speed up right now, so it was turned off to simplify requirements.
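Since Mimi compiles lazily at runtime, the variable has to be set before the model code runs. A sketch using the value mentioned above (setting it in the shell before launching would work equally well):

```python
import os

# Disable torch compile (and with it the triton dependency).
# Must be set before the model code is imported, since Mimi compiles lazily.
os.environ["NO_TORCH_COMPILE"] = "True"
```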
Hey, just wanted to hop in here and post my hacky workaround, which was to force my MacBook (Apple Silicon) to use cpu. It will still run into the silentcipher/server.py encode_wav error mentioned before, but that goes away if you replace line 317 with a float32 conversion. Not the most elegant solution, but it worked for testing things :)
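The dtype fix being described amounts to downcasting before the data reaches the MPS backend, since MPS cannot represent float64. A sketch with numpy standing in for the tensor; the helper name is illustrative and the actual line 317 of silentcipher/server.py is not reproduced here:

```python
import numpy as np

def mps_safe(arr: np.ndarray) -> np.ndarray:
    """Downcast float64 arrays to float32, since the MPS backend has no
    float64 support. Other dtypes pass through unchanged.
    The torch equivalent would be tensor.to(torch.float32)."""
    if arr.dtype == np.float64:
        return arr.astype(np.float32)
    return arr
```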
@lewiswatson55 literally we figured out the same hack!
Thanks for finding this. It has been merged into our silentcipher fork. I've tested on Linux cpu/gpu, but not on a MacBook.


This aims to simplify getting started for others.