When I use AIMV2 as the vision encoder, unfreeze it, and set its learning rate to 2e-6, the LLaVA-NeXT model's training loss drops to 0 after 3000-4000 steps. The original paper kept the encoder frozen. Why is unfreezing it for training not recommended? And if I do unfreeze it, what learning rate should I use?
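
For reference, here is roughly how I unfreeze the encoder and assign it a separate learning rate (a minimal sketch: the `vision_tower` name prefix and the 1e-5 base LR are my assumptions based on the usual LLaVA-NeXT layout, not the exact repo code):

```python
import torch

def build_optimizer(model, base_lr=1e-5, encoder_lr=2e-6):
    """Unfreeze the vision encoder and give it its own learning rate."""
    encoder_params, other_params = [], []
    for name, param in model.named_parameters():
        if "vision_tower" in name:      # assumed prefix for AIMV2 encoder params
            param.requires_grad = True  # unfrozen, unlike the original paper
            encoder_params.append(param)
        else:
            other_params.append(param)
    # Separate param groups so the encoder trains at a lower LR than the rest
    return torch.optim.AdamW(
        [
            {"params": other_params, "lr": base_lr},
            {"params": encoder_params, "lr": encoder_lr},
        ],
        weight_decay=0.0,
    )
```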