diff --git a/docs/understanding/using_llms/privacy.md b/docs/understanding/using_llms/privacy.md
index ad58350814e9c50e54a3a4fb42cf6732350067e9..7a4d3488f00d7f794a7bffb539aa3d1148a98bd2 100644
--- a/docs/understanding/using_llms/privacy.md
+++ b/docs/understanding/using_llms/privacy.md
@@ -4,8 +4,8 @@ By default, LLamaIndex sends your data to OpenAI for generating embeddings and n

 ## Data Privacy

-Regarding data privacy, when using LLamaIndex with OpenAI, the privacy details and handling of your data are subject to OpenAI's policies. And each custom service other than OpenAI have their own policies as well.
+Regarding data privacy, when using LlamaIndex with OpenAI, the privacy details and handling of your data are subject to OpenAI's policies. Each custom service other than OpenAI has its own policies as well.

 ## Vector stores

-LLamaIndex offers modules to connect with other vector stores within indexes to store embeddings. It is worth noting that each vector store has its own privacy policies and practices, and LLamaIndex does not assume responsibility for how they handle or use your data. Also by default LLamaIndex have a default option to store your embeddings locally.
+LlamaIndex offers modules to connect with other vector stores within indexes to store embeddings. It is worth noting that each vector store has its own privacy policies and practices, and LlamaIndex does not assume responsibility for how these stores handle or use your data. Also, by default, LlamaIndex stores your embeddings locally.
diff --git a/docs/understanding/using_llms/using_llms.md b/docs/understanding/using_llms/using_llms.md
index ce6b1d43e31a6cdaed5f91f1de62e6d3602671e0..624e5d185c5c70bb268e6975ce3230e4627c43b7 100644
--- a/docs/understanding/using_llms/using_llms.md
+++ b/docs/understanding/using_llms/using_llms.md
@@ -22,7 +22,7 @@ response = OpenAI().complete("Paul Graham is ")
 print(response)
 ```

-Usually you will instantiate an LLM and pass it to `Settings`, which you then pass to other stages of the pipeline, as in this example:
+Usually, you will instantiate an LLM and pass it to `Settings`, which other stages of the pipeline then read from, as in this example:

 ```python
 from llama_index.llms.openai import OpenAI
@@ -49,7 +49,7 @@ We support integrations with OpenAI, Hugging Face, PaLM, and more. Check out our

 ### Using a local LLM

-LlamaIndex doesn't just supported hosted LLM APIs; you can also [run a local model such as Llama2 locally](https://replicate.com/blog/run-llama-locally).
+LlamaIndex doesn't just support hosted LLM APIs; you can also [run a model such as Llama2 locally](https://replicate.com/blog/run-llama-locally).

 For example, if you have [Ollama](https://github.com/ollama/ollama) installed and running:

@@ -64,7 +64,7 @@ See the [custom LLM's How-To](/module_guides/models/llms/usage_custom.md) for mo

 ## Prompts

-By default LlamaIndex comes with a great set of built-in, battle-tested prompts that handle the tricky work of getting a specific LLM to correctly handle and format data. This is one of the biggest benefits of using LlamaIndex. If you want to, you can [customize the prompts](/module_guides/models/prompts.md)
+By default, LlamaIndex comes with a great set of built-in, battle-tested prompts that handle the tricky work of getting a specific LLM to correctly handle and format data. This is one of the biggest benefits of using LlamaIndex. If you want to, you can [customize the prompts](/module_guides/models/prompts.md).
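For reviewers, here is a minimal sketch of the `Settings` flow the first `using_llms.md` hunk reworded, since the hunk truncates the surrounding example. It assumes the post-0.10 `llama_index.core` package layout; the model name and the `data` directory are illustrative, not taken from the docs:

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.openai import OpenAI

# Register the LLM once; components built afterwards pick it up from Settings.
Settings.llm = OpenAI(temperature=0.2, model="gpt-4")

# Illustrative corpus; any loader works here.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("Summarize these documents in one sentence."))
```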
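The reworded `privacy.md` claim that LlamaIndex stores your embeddings locally by default can be made concrete with a persistence sketch, under the same assumptions; `./storage` is an arbitrary path chosen for illustration:

```python
from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

# With no external vector store configured, embeddings sit in the default
# in-memory SimpleVectorStore attached to the index.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Write the index, embeddings included, to local disk...
index.storage_context.persist(persist_dir="./storage")

# ...and load it back later without re-embedding the documents.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
```

Reloading from `./storage` avoids re-embedding, which also means the raw text is not re-sent to the embedding provider; query-time embeddings still go to whatever embedding model is configured.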
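Lastly, a sketch of the prompt customization the `## Prompts` hunk links out to. The template text below is invented for illustration; `text_qa_template` on `as_query_engine` is a common override point:

```python
from llama_index.core import (
    PromptTemplate,
    SimpleDirectoryReader,
    VectorStoreIndex,
)

# Invented template text; {context_str} and {query_str} are the variables the
# query engine fills in at question-answering time.
qa_template = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using only the context above, answer the query: {query_str}\n"
)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(text_qa_template=qa_template)
print(query_engine.query("What does this corpus say about data privacy?"))
```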