From 6105a3f886319948886e199fa241ac2997e05838 Mon Sep 17 00:00:00 2001
From: Hamid Shojanazeri <hamid.nazeri2010@gmail.com>
Date: Mon, 28 Aug 2023 22:45:12 +0000
Subject: [PATCH] clarifying the infilling use-case

---
 docs/inference.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/inference.md b/docs/inference.md
index a08f18a6..342bc3bc 100644
--- a/docs/inference.md
+++ b/docs/inference.md
@@ -50,7 +50,7 @@ python inference/chat_completion.py --model_name "PATH/TO/MODEL/7B/" --prompt_fi
 ```
 **Code Llama**
 
-Code llama was recently released with three flavors, base-model that support multiple programming languages, Python fine-tuned model and an instruction fine-tuned and aligned variation of Code Llama, please read more [here](https://ai.meta.com/blog/code-llama-large-language-model-coding/). Also note that the Python fine-tuned model and 34B models are not trained on infilling objective, hence can not be used for this use-case.
+Code Llama was recently released with three flavors: a base model that supports multiple programming languages, a Python fine-tuned model, and an instruction fine-tuned and aligned variation of Code Llama; please read more [here](https://ai.meta.com/blog/code-llama-large-language-model-coding/). Also note that the Python fine-tuned model and the 34B models are not trained on the infilling objective, hence cannot be used for the infilling use-case.
 
 Find the scripts to run Code Llama [here](../inference/code-llama/), where there are two examples of running code completion and infilling.
 
-- 
GitLab