This project is mirrored from https://github.com/meta-llama/llama-recipes.
  1. Jan 15, 2025
  2. Jan 10, 2025
  3. Jan 09, 2025
  4. Jan 02, 2025
  5. Nov 19, 2024
  6. Nov 16, 2024
  7. Oct 24, 2024
    • Append epoch rather than best val. loss to val_loss · 2a94bfff
      celestinoalan authored
      **Problem**
      Currently, we call val_loss.append(best_val_loss) in each epoch. This is misleading, because train_loss, train_prep, and val_prep record the corresponding epoch's quantities, not the best across epochs. It is also inconvenient, since one often wants to plot the train and validation losses as a function of the epoch to check for overfitting.

      **Solution**
      val_loss.append(eval_epoch_loss)
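      The change above can be sketched as a minimal training loop. The names train_loss, val_loss, best_val_loss, and eval_epoch_loss follow the commit message; the loop body itself is a simplified illustration under assumed callbacks (run_epoch, evaluate), not the actual llama-recipes training code.

      ```python
      def train(num_epochs, run_epoch, evaluate):
          """Per-epoch bookkeeping, as fixed by this commit."""
          train_loss, val_loss = [], []
          best_val_loss = float("inf")
          for epoch in range(num_epochs):
              train_epoch_loss = run_epoch(epoch)
              eval_epoch_loss = evaluate(epoch)
              if eval_epoch_loss < best_val_loss:
                  best_val_loss = eval_epoch_loss
              train_loss.append(train_epoch_loss)
              # Before the fix: val_loss.append(best_val_loss)
              # That recorded the best loss so far, so val_loss could not be
              # plotted against train_loss per epoch. After the fix, this
              # epoch's own validation loss is appended:
              val_loss.append(eval_epoch_loss)
          return train_loss, val_loss
      ```

      With validation losses 3.0, 2.0, 4.0 over three epochs, the old code would have stored [3.0, 2.0, 2.0], hiding the epoch-3 regression; the fixed code stores [3.0, 2.0, 4.0].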
  8. Oct 21, 2024
  9. Oct 18, 2024
  10. Oct 15, 2024
  11. Oct 14, 2024
  12. Oct 12, 2024
  13. Oct 11, 2024
  14. Oct 08, 2024
  15. Oct 02, 2024
  16. Sep 27, 2024