This project is mirrored from https://github.com/meta-llama/llama-recipes.
Pull mirroring failed: repository mirroring has been paused due to too many failed attempts. It can be resumed by a project maintainer or owner.
- Mar 13, 2024
  - Suraj Subramanian authored: Add new file org structure; add new notebooks to quickstart on Mac and via HF; consolidate all images in a top-level folder; update main README; remove "news" section from main README; rename HF Trainer finetuning notebook and add detail to README. Co-authored-by: Navyata Bawa <bnavyata@fb.com> and Hamid Shojanazeri <hamid.nazeri2010@gmail.com>
- Mar 11, 2024
  - Hamid Shojanazeri authored
- Mar 04, 2024
  - Jeff Tang authored: Updating the AWS Prompt_Engineering Notebook + Adding an Example of ReAct with Llama 2 on Bedrock (#386)
  - Eissa Jamil authored: Updated minor typos/fixes from reviewer comments
  - Eissa Jamil authored: Quick whitepaper reference added to the INST Prompt Tags section
  - Eissa Jamil authored: Making sure the attribution was added back in
  - Eissa Jamil authored: Short explanation of the ReAct setup and configuration to use Amazon Bedrock; example of using the Bedrock API via LangChain; setup for DuckDuckGoSearchRun, WikipediaAPIWrapper, and PythonREPL. Created a pattern for the model to follow in order to use the tools and do reasoning similar to CoT, and cleaned up and formatted the generated text before giving it to the corresponding tool. (See the first sketch after this list.)
  - Eissa Jamil authored: Added a section to help better understand how to use the [INST] tags, updated some examples, and added a reference to the Deeplearning.AI Prompt Engineering course. (See the second sketch after this list.)
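For reference, here is a minimal sketch of the kind of ReAct agent setup the entry above describes: Llama 2 served through Amazon Bedrock and driven by LangChain, with DuckDuckGoSearchRun, WikipediaAPIWrapper, and PythonREPL as tools. This is not the notebook's exact code; the model ID, model kwargs, tool descriptions, and import paths (LangChain 0.1-era) are assumptions, and AWS credentials with Bedrock model access are required.

```python
# Hedged sketch of a ReAct agent over Llama 2 on Amazon Bedrock via LangChain.
# Assumes boto3, langchain, langchain_experimental, duckduckgo-search, and wikipedia
# are installed, and that AWS credentials with Bedrock access are configured.
import boto3
from langchain.llms import Bedrock
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.tools import DuckDuckGoSearchRun
from langchain.utilities import WikipediaAPIWrapper
from langchain_experimental.utilities import PythonREPL

# Bedrock runtime client and an LLM wrapper around a Llama 2 chat model
# (the model ID is an assumption; check what is available in your region).
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
llm = Bedrock(
    client=bedrock_runtime,
    model_id="meta.llama2-70b-chat-v1",
    model_kwargs={"temperature": 0.1, "max_gen_len": 512},
)

# Tools the agent can call; the descriptions guide the ReAct Thought/Action pattern.
search = DuckDuckGoSearchRun()
wikipedia = WikipediaAPIWrapper()
python_repl = PythonREPL()
tools = [
    Tool(name="Search", func=search.run, description="Web search for current events."),
    Tool(name="Wikipedia", func=wikipedia.run, description="Look up encyclopedic facts."),
    Tool(name="Python", func=python_repl.run, description="Execute Python for calculations."),
]

# Zero-shot ReAct agent; handle_parsing_errors helps when the model's output
# needs cleanup before it can be routed to a tool.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True, handle_parsing_errors=True,
)
agent.run("Who is the current CEO of AWS, and what is 17 * 23?")
```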
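And here is a small illustration of the Llama 2 chat prompt format that the [INST] tags section covers: the system prompt sits inside <<SYS>> markers, and each user turn is wrapped in [INST] ... [/INST]. This is a sketch of the documented template rather than code from the notebook; the helper name is hypothetical.

```python
# Llama 2 chat prompt template: system prompt in <<SYS>> tags, user turn in [INST] tags.
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    # Hypothetical helper, shown only to make the tag layout concrete.
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_llama2_prompt(
    system_prompt="You are a helpful, concise assistant.",
    user_message="Explain what the [INST] tags are for in one sentence.",
)
print(prompt)
```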
- Mar 01, 2024
  - Hamid Shojanazeri authored
  - Hamid Shojanazeri authored
  - Hamid Shojanazeri authored
  - Hamid Shojanazeri authored
  - Hamid Shojanazeri authored
- Feb 29, 2024
  - Hamid Shojanazeri authored
  - Joone Hur authored: Gradio web interface is used in examples/inference.py, so we need to add gradio to requirements.txt. (See the sketch after this list.)
  - Eissa Jamil authored: Quick update to make sure we include the CodeLlama 70b models and the Llama Guard model
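As a rough illustration of the Gradio dependency mentioned above: examples/inference.py exposes generation through a Gradio web interface, so gradio must be importable. The snippet below is a minimal stand-in, not the script itself; generate() is a hypothetical placeholder for the real model call.

```python
# Minimal Gradio wrapper around a text-generation function (placeholder shown).
import gradio as gr

def generate(prompt: str) -> str:
    # Hypothetical stand-in for the actual inference code in examples/inference.py.
    return f"(model output for: {prompt})"

demo = gr.Interface(fn=generate, inputs="text", outputs="text")
demo.launch()  # serves a local web UI; requires `pip install gradio`
```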
- Feb 28, 2024
  - Eissa Jamil authored: Update for clarity. Co-authored-by: Hamid Shojanazeri <hamid.nazeri2010@gmail.com>
  - Eissa Jamil authored: Making sure we also list the base model. Co-authored-by: Hamid Shojanazeri <hamid.nazeri2010@gmail.com>
  - Hamid Shojanazeri authored
  - Joone Hur authored
  - Joone Hur authored
- Feb 27, 2024
  - Hamid Shojanazeri authored
  - Hamid Shojanazeri authored
- Feb 26, 2024
  - Thierry Moreau authored
  - Thierry Moreau authored
  - Thierry Moreau authored
  - Thierry Moreau authored
  - Thierry Moreau authored
  - Thierry Moreau authored
  - Thierry Moreau authored
- Feb 23, 2024
  - Guocheng authored
- Feb 21, 2024
  - Eissa Jamil authored
  - Eissa Jamil authored: Added a version of the Prompt Engineering with Llama 2 notebook that utilizes Amazon Bedrock, plus a Getting Started with Llama 2 on Amazon Bedrock guide to help developers get going quickly: reference documentation, setting up credentials, and ultimately using the bedrock client and client_runtime to run a couple of examples and show the difference between prompts run against Llama 2 13b chat vs Llama 2 70b chat. (A sketch follows below.)
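To give a concrete flavor of the Bedrock workflow that entry describes, here is a minimal sketch of invoking Llama 2 chat models with boto3, using the management ("bedrock") client alongside the inference ("bedrock-runtime") client. It assumes AWS credentials with Bedrock access are already configured; the model IDs, request fields, and helper name are assumptions to verify against your account and region.

```python
# Hedged sketch: call Llama 2 chat on Amazon Bedrock with boto3.
# Assumes AWS credentials with Bedrock model access are configured.
import json
import boto3

# The plain "bedrock" client handles management calls (e.g. listing models),
# while "bedrock-runtime" handles inference.
bedrock = boto3.client("bedrock", region_name="us-east-1")
# bedrock.list_foundation_models()  # uncomment to inspect available model IDs
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def invoke_llama2(prompt: str, model_id: str = "meta.llama2-13b-chat-v1") -> str:
    # Hypothetical helper: wrap the prompt in Llama 2 chat tags and parse the response body.
    body = json.dumps({
        "prompt": f"<s>[INST] {prompt} [/INST]",
        "max_gen_len": 256,
        "temperature": 0.5,
        "top_p": 0.9,
    })
    response = bedrock_runtime.invoke_model(modelId=model_id, body=body)
    return json.loads(response["body"].read())["generation"]

# Compare the same prompt across the 13b and 70b chat models.
question = "Summarize what Amazon Bedrock is in two sentences."
print("13b:", invoke_llama2(question, "meta.llama2-13b-chat-v1"))
print("70b:", invoke_llama2(question, "meta.llama2-70b-chat-v1"))
```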
- Feb 15, 2024
  - Thierry Moreau authored
  - Thierry Moreau authored
  - Thierry Moreau authored
  - Thierry Moreau authored
  - Thierry Moreau authored
  - Thierry Moreau authored
  - Thierry Moreau authored