Update finetune_deepseekcoder.py #609

Open

naanrdk wants to merge 1 commit into deepseek-ai:main from naanrdk:patch-1
Conversation

@naanrdk naanrdk commented Jan 29, 2025

Using torch.float16 or torch.cuda.amp (automatic mixed precision) can significantly reduce memory usage and speed up training by performing most computations in lower precision.
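As a minimal sketch of the suggested change, the standard PyTorch mixed-precision pattern wraps the forward pass in `torch.autocast` and scales the loss with a `GradScaler` (the tiny model and data here are placeholders, not the fine-tuning script's actual model):

```python
import torch
from torch import nn

# Placeholder model/optimizer standing in for the fine-tuned model
model = nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"
model.to(device)

# GradScaler guards float16 gradients against underflow on CUDA;
# it becomes a no-op when disabled (e.g. on CPU)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

x = torch.randn(4, 8, device=device)
y = torch.randint(0, 2, (4,), device=device)

# autocast runs eligible ops in low precision; CPU autocast requires bfloat16
amp_dtype = torch.float16 if use_cuda else torch.bfloat16
with torch.autocast(device_type=device, dtype=amp_dtype):
    loss = nn.functional.cross_entropy(model(x), y)

scaler.scale(loss).backward()  # scale loss, then backprop
scaler.step(opt)               # unscale grads and step the optimizer
scaler.update()                # adjust the scale factor for the next step
print(torch.isfinite(loss).item())
```

Alternatively, when using the Hugging Face `Trainer` (as the fine-tuning script does), the same effect is usually achieved by passing `fp16=True` (or `bf16=True`) in `TrainingArguments` rather than hand-writing the autocast loop.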
