set default max_tokens to 1024 (#13371)

If `max_tokens` is unset, the service decides and picks a value small enough that users report the model as broken or inaccurate.
- llama-index-integrations/llms/llama-index-llms-nvidia/llama_index/llms/nvidia/base.py (2 additions, 0 deletions)
- llama-index-integrations/llms/llama-index-llms-nvidia/pyproject.toml (1 addition, 1 deletion)