diff --git a/recipes/code_llama/README.md b/recipes/code_llama/README.md
index d5f4bda52e2576d9e6fc33d049129d7c1ec0e54d..ef1be5e83731df0527483695f7c230e7f9acdd82 100644
--- a/recipes/code_llama/README.md
+++ b/recipes/code_llama/README.md
@@ -4,7 +4,7 @@ Code llama was recently released with three flavors, base-model that support mul
 
 Find the scripts to run Code Llama, where there are two examples of running code completion and infilling.
 
-**Note** Please find the right model on HF side [here](https://huggingface.co/codellama).
+**Note** Please find the right model on HF [here](https://huggingface.co/models?search=meta-llama%20codellama).
 
 Make sure to install Transformers from source for now
 
@@ -36,4 +36,4 @@ To run the 70B Instruct model example run the following (you'll need to enter th
 python code_instruct_example.py --model_name codellama/CodeLlama-70b-Instruct-hf --temperature 0.2 --top_p 0.9
 ```
 
-You can learn more about the chat prompt template [on HF](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf#chat-prompt) and [original Code Llama repository](https://github.com/facebookresearch/codellama/blob/main/README.md#fine-tuned-instruction-models). HF tokenizer has already taken care of the chat template as shown in this example.
+You can learn more about the chat prompt template [on HF](https://huggingface.co/meta-llama/CodeLlama-70b-Instruct-hf#chat-prompt) and in the [original Code Llama repository](https://github.com/meta-llama/codellama/blob/main/README.md#fine-tuned-instruction-models). The HF tokenizer already takes care of the chat template, as shown in this example.
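The updated README line above notes that the HF tokenizer handles the chat template. A minimal sketch of what that templating step does is below; the `apply_chat_template` function here is a simplified, illustrative stand-in (the role markers and `<step>` separator are assumptions modeled loosely on the 70B Instruct format), not the model's exact template. In practice the real format lives in the tokenizer config, and you would call `tokenizer.apply_chat_template(messages, tokenize=False)` on the `AutoTokenizer` loaded for `codellama/CodeLlama-70b-Instruct-hf`.

```python
# Illustrative sketch of chat templating: turn a role-tagged message list
# into a single prompt string the model can complete. This local function
# mimics the general shape of a chat template; it is NOT the exact
# CodeLlama-70b-Instruct format, which the HF tokenizer applies for you.

def apply_chat_template(messages):
    """Join role-tagged messages into one prompt string (illustrative only)."""
    parts = [f"Source: {m['role']}\n\n{m['content']}" for m in messages]
    # Leave the assistant turn open so the model continues from this point.
    parts.append("Source: assistant\n\n")
    return " <step> ".join(parts)

messages = [
    {"role": "system", "content": "Provide answers in Python."},
    {"role": "user", "content": "Write a function that reverses a string."},
]
prompt = apply_chat_template(messages)
print(prompt)
```

The point of the sketch is that user code only builds the `messages` list; the tokenizer owns the concatenation format, which is why the example script never constructs the prompt string by hand.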