This project is mirrored from https://github.com/meta-llama/llama-recipes.
Pull mirroring failed. Repository mirroring has been paused due to too many failed attempts. It can be resumed by a project maintainer or owner.
Last successful update.
- Nov 26, 2024
- Nov 21, 2024
  - Zain Hasan authored
  - Zain Hasan authored
  - Zain Hasan authored
  - Zain Hasan authored
- Nov 01, 2024
- Oct 31, 2024
  - Zain Hasan authored
  - Zain Hasan authored
  - Zain Hasan authored
- Oct 21, 2024
  - Kai Wu authored
- Oct 08, 2024
- Sep 03, 2024
  - Eda Z authored
- Jul 29, 2024
  - Chester Hu authored
    Updated the endpoint to support 3.1. Also updated the LangChain and Gradio integrations, as their frameworks have been updated.
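    For context, a minimal sketch of what wiring a 3.1-capable, OpenAI-compatible endpoint into LangChain and Gradio can look like; the base URL, API key, and model name below are placeholders, not values taken from this commit:

    ```python
    # Hypothetical wiring, not the repo's actual demo code: a Gradio chat UI
    # backed by LangChain's ChatOpenAI client pointed at an OpenAI-compatible
    # Llama 3.1 endpoint. URL, key, and model name are placeholders.
    import gradio as gr
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(
        base_url="http://localhost:8000/v1",  # placeholder endpoint URL
        api_key="EMPTY",                      # placeholder key for a local server
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    )

    def chat(message, history):
        # ChatOpenAI.invoke accepts a plain string and returns an AIMessage
        return llm.invoke(message).content

    # ChatInterface passes (message, history) to the handler and renders the reply
    gr.ChatInterface(chat).launch()
    ```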
- Jul 22, 2024
  - Matthias Reso authored
  - Matthias Reso authored
- Jul 19, 2024
  - Matthias Reso authored
- Jul 18, 2024
  - Suraj Subramanian authored
  - Suraj authored
  - Matthias Reso authored
    Enable pipeline parallelism through use of AsyncLLMEngine in vLLM inference + enable use of LoRA adapters
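    A minimal sketch of the pattern this commit describes, assuming vLLM's public API; the model name and adapter path are placeholders rather than values from the commit:

    ```python
    # Sketch only (placeholder model and adapter path): run vLLM's
    # AsyncLLMEngine with pipeline parallelism plus a LoRA adapter.
    import asyncio

    from vllm import AsyncEngineArgs, AsyncLLMEngine, SamplingParams
    from vllm.lora.request import LoRARequest

    async def main() -> None:
        engine = AsyncLLMEngine.from_engine_args(
            AsyncEngineArgs(
                model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # placeholder
                pipeline_parallel_size=2,  # split model layers across 2 GPUs
                enable_lora=True,
                max_loras=1,
            )
        )
        lora = LoRARequest("my_adapter", 1, "/path/to/lora_adapter")  # hypothetical
        params = SamplingParams(max_tokens=64)

        # generate() is an async generator that streams partial RequestOutputs;
        # the last item holds the completed generation.
        final = None
        async for out in engine.generate(
            "Hello!", params, request_id="req-0", lora_request=lora
        ):
            final = out
        print(final.outputs[0].text)

    asyncio.run(main())
    ```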
- Jul 11, 2024
  - Jeff Tang authored
- Jul 10, 2024
- Jul 05, 2024
- Jul 03, 2024
- Jul 01, 2024
  - Suraj Subramanian authored
  - Suraj Subramanian authored
  - Suraj Subramanian authored