From 9be92917179b10389de2d7745506902c0942ad4b Mon Sep 17 00:00:00 2001
From: Jeff Tang <jeffxtang@meta.com>
Date: Sat, 7 Oct 2023 22:53:31 -0700
Subject: [PATCH] updated readme - links to HelloLlama in the headers - typo fix

---
 llama-demo-apps/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llama-demo-apps/README.md b/llama-demo-apps/README.md
index 0f21e376..7d593d73 100644
--- a/llama-demo-apps/README.md
+++ b/llama-demo-apps/README.md
@@ -31,7 +31,7 @@ The HelloLlama cloud version uses LangChain with Llama2 hosted in the cloud on [
 
 [Note on using Replicate](#replicate_note) To run some of the demo apps here, you'll need to first sign in to Replicate with your GitHub account, then create a free API token [here](https://replicate.com/account/api-tokens) that you can use for a while. After the free trial ends, you'll need to enter billing info to continue to use Llama2 hosted on Replicate - according to Replicate's [Run time and cost](https://replicate.com/meta/llama-2-13b-chat) for the Llama2-13b-chat model used in our demo apps, the model "costs $0.000725 per second. Predictions typically complete within 10 seconds." This means each call to the Llama2-13b-chat model costs less than $0.01 if the call completes within 10 seconds. If you want absolutely no costs, you can refer to the section "Running Llama2 locally on Mac" above or the "Running Llama2 in Google Colab" below.
 
-### [Running Llama2 in Google Colab]((https://colab.research.google.com/drive/1-uBXt4L-6HNS2D8Iny2DwUpVS4Ub7jnk?usp=sharing))
+### [Running Llama2 in Google Colab](https://colab.research.google.com/drive/1-uBXt4L-6HNS2D8Iny2DwUpVS4Ub7jnk?usp=sharing)
 To run Llama2 in Google Colab using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), download the quantized Llama2-13b-chat model `ggml-model-q4_0.gguf` [here](https://drive.google.com/file/d/1afPv3HOy73BE2MoYCgYJvBDeQNa9rZbj/view?usp=sharing), or follow the instructions above to build it, before uploading it to your Google Drive. Note that on the free Colab T4 GPU, the call to Llama could take more than 20 minutes to return; running the notebook locally on an M1 MBP takes about 20 seconds.
 
 * To run a quantized Llama2 model on iOS and Android, you can use the open source [MLC LLM](https://github.com/mlc-ai/mlc-llm) or [llama.cpp](https://github.com/ggerganov/llama.cpp). You can even make a Linux OS that boots to Llama2 ([repo](https://github.com/trholding/llama2.c)).
--
GitLab
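
For reference, a minimal sketch of the Colab/local workflow the patched README section describes: loading the quantized `ggml-model-q4_0.gguf` model with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) and running a single prompt. The model path, context size, GPU layer count, and prompt below are illustrative assumptions, not values taken from the README or notebook.

```python
# Sketch only: load the quantized Llama2-13b-chat GGUF model with llama-cpp-python
# and run one completion. Path, n_ctx, n_gpu_layers, and the prompt are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="ggml-model-q4_0.gguf",  # quantized model uploaded to Google Drive / mounted in Colab
    n_ctx=2048,                         # assumed context window
    n_gpu_layers=0,                     # set > 0 to offload layers to the Colab T4 GPU
    verbose=False,
)

output = llm(
    "Q: What is Llama2 and what is it good for? A:",  # example prompt (assumption)
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```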