Commit 8ac44ef3 authored by Matthias Reso

Fix vocab size mismatch in inference due to added pad token

parent 40b32ba5
...
@@ -72,13 +72,7 @@ def main(
             print("Module 'optimum' not found. Please install 'optimum' it before proceeding.")
     tokenizer = LlamaTokenizer.from_pretrained(model_name)
-    tokenizer.add_special_tokens(
-        {
-            "pad_token": "<PAD>",
-        }
-    )
-    model.resize_token_embeddings(model.config.vocab_size + 1)
+    tokenizer.pad_token = tokenizer.eos_token
     safety_checker = get_safety_checker(enable_azure_content_safety,
                                         enable_sensitive_topics,
...
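For context, a minimal sketch of why reusing the EOS token as padding avoids the vocab size mismatch. The checkpoint path below is an illustrative placeholder, and the printed sizes assume a standard Llama tokenizer with a 32000-token vocabulary:

    from transformers import LlamaTokenizer

    model_name = "path/to/llama-checkpoint"  # illustrative placeholder, not a real path

    tokenizer = LlamaTokenizer.from_pretrained(model_name)
    print(len(tokenizer))  # matches the checkpoint's embedding rows, e.g. 32000

    # Previous approach: registering a brand-new <PAD> token grows the vocabulary,
    # so the model's embedding matrix must be resized or inference fails with a size mismatch.
    # tokenizer.add_special_tokens({"pad_token": "<PAD>"})  # len(tokenizer) would become 32001

    # This commit's approach: reuse the existing EOS token for padding,
    # leaving the vocabulary and the pretrained embedding matrix untouched.
    tokenizer.pad_token = tokenizer.eos_token
    print(len(tokenizer))  # still 32000

Reusing EOS as the pad token is a common convention for decoder-only models at inference time, since padded positions are excluded via the attention mask anyway.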