This project is mirrored from https://github.com/run-llama/create-llama.
Pull mirroring updated.

Commit history:

- Dec 04, 2023
  - thucpn authored
- Nov 28, 2023
  - yisding authored: Feat: add GPT4 Vision support (and file upload) to create-llama
- Nov 24, 2023
  - Marcus Schiesser authored
  - Marcus Schiesser authored: fix: set maxTokens to 4096 so vision model is not stopping too early (seems to have a lower default than other models)
  - Marcus Schiesser authored
  - Alex Yang authored
- Nov 23, 2023
  - yisding authored
  - yisding authored: Several fixes for improving compatibility with Next.JS
  - Marcus Schiesser authored
  - Marcus Schiesser authored
  - Marcus Schiesser authored
  - Marcus Schiesser authored
  - Marcus Schiesser authored
  - Marcus Schiesser authored
  - Marcus Schiesser authored
- Nov 22, 2023
- Nov 20, 2023
  - yisding authored
  - yisding authored
  - Laurie Voss authored
  - Laurie Voss authored
  - yisding authored
  - Laurie Voss authored
  - yisding authored
  - Laurie Voss authored
- Nov 19, 2023
  - Laurie Voss authored
  - Laurie Voss authored
  - Laurie Voss authored
- Nov 18, 2023
- Nov 17, 2023
  - Laurie Voss authored
  - Logan authored
  - yisding authored
  - yisding authored
  - yisding authored
  - yisding authored: fix: copy cache folder for vercel deployments
  - Logan Markewich authored
  - Marcus Schiesser authored
- Nov 16, 2023
  - Logan Markewich authored