This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
- Feb 06, 2025
  Timothy Carambat authored
  Add in-text citations as well for PPLX token streaming; handle timeouts for stream/buffer hanging
- Jan 24, 2025
  timothycarambat authored
  Sean Hatfield authored
  bump perplexity models
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
- Sep 12, 2024
  Timothy Carambat authored
- Aug 19, 2024
- Jul 31, 2024
  timothycarambat authored
- Jul 28, 2024
  timothycarambat authored
  closes #1990
- Jul 19, 2024
  Timothy Carambat authored
  * Added Supported Models Free Tier - chat_models.txt. Need to fill in correct Parameter Count.
  * Bump perplexity model
  closes #1901, closes #1900
  Co-authored-by: Tim-Hoekstra <135951177+Tim-Hoekstra@users.noreply.github.com>
- Apr 25, 2024
  timothycarambat authored
  resolves #1188
- Apr 14, 2024
  Timothy Carambat authored
- Feb 24, 2024
  Timothy Carambat authored
  bump pplx model support
- Feb 22, 2024
  Sean Hatfield authored
  * add LLM support for perplexity
  * update README & example env
  * fix ENV keys in example env files
  * slight changes for QA of perplexity support
  * Update Perplexity AI name
  Co-authored-by: timothycarambat <rambat1010@gmail.com>