From 07991603ff038150fff076c04ab15cc2bf6ac945 Mon Sep 17 00:00:00 2001
From: sekyonda <127536312+sekyondaMeta@users.noreply.github.com>
Date: Fri, 21 Jul 2023 20:10:51 -0400
Subject: [PATCH] Update inference.md

---
 docs/inference.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/inference.md b/docs/inference.md
index 144431bb..475251ae 100644
--- a/docs/inference.md
+++ b/docs/inference.md
@@ -31,7 +31,7 @@ inference/samsum_prompt.txt
 The inference folder also includes a chat completion example, that adds built-in safety features in fine-tuned models to the prompt tokens. To run the example:
 
 ```bash
-python chat_completion.py --model_name "PATH/TO/MODEL/7B/" --prompt_file chats.json --quantization --use_auditnlg
+python inference/chat_completion.py --model_name "PATH/TO/MODEL/7B/" --prompt_file chats.json --quantization --use_auditnlg
 ```
--
GitLab