Unverified commit a7d8a105, authored by Shorthills AI, committed by GitHub

Fixed some grammatical mistakes (#11840)

* Fixed some grammatical mistakes (#82)

* Update discover_llamaindex.md

* Update installation.md

* Update reading.md

* Update starter_example.md

* Update starter_example_local.md

* Update v0_10_0_migration.md

* Update 2024-02-28-rag-bootcamp-vector-institute.ipynb

* Update multimodal.md

* Update chatbots.md

* Fixed some minor documentation errors (#83)

* Update privacy.md

* Update using_llms.md
parent 6cd92aff
@@ -4,8 +4,8 @@ By default, LLamaIndex sends your data to OpenAI for generating embeddings and n
 ## Data Privacy
-Regarding data privacy, when using LLamaIndex with OpenAI, the privacy details and handling of your data are subject to OpenAI's policies. And each custom service other than OpenAI have their own policies as well.
+Regarding data privacy, when using LLamaIndex with OpenAI, the privacy details and handling of your data are subject to OpenAI's policies. And each custom service other than OpenAI has its policies as well.
 ## Vector stores
-LLamaIndex offers modules to connect with other vector stores within indexes to store embeddings. It is worth noting that each vector store has its own privacy policies and practices, and LLamaIndex does not assume responsibility for how they handle or use your data. Also by default LLamaIndex have a default option to store your embeddings locally.
+LLamaIndex offers modules to connect with other vector stores within indexes to store embeddings. It is worth noting that each vector store has its own privacy policies and practices, and LLamaIndex does not assume responsibility for how it handles or uses your data. Also by default, LLamaIndex has a default option to store your embeddings locally.
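
For readers of this hunk, a minimal sketch of the local-storage option the changed line mentions, assuming the llama_index >= 0.10 `llama_index.core` package layout; the `data` and `./storage` paths are illustrative:

```python
# Sketch: keeping embeddings on local disk rather than in a remote
# vector store (assumes llama_index >= 0.10 package layout).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents and build an in-memory index; paths are illustrative.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Persist the embeddings and index metadata locally.
index.storage_context.persist(persist_dir="./storage")
```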
@@ -22,7 +22,7 @@ response = OpenAI().complete("Paul Graham is ")
 print(response)
 ```
-Usually you will instantiate an LLM and pass it to `Settings`, which you then pass to other stages of the pipeline, as in this example:
+Usually, you will instantiate an LLM and pass it to `Settings`, which you then pass to other stages of the pipeline, as in this example:
 ```python
 from llama_index.llms.openai import OpenAI
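
The diff cuts the example short; a minimal runnable sketch of the `Settings` pattern the changed line describes, assuming the llama_index >= 0.10 layout (the model name and `data` directory are illustrative):

```python
# Sketch of the Settings pattern (assumes llama_index >= 0.10;
# model name and data directory are illustrative).
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.openai import OpenAI

# Set the LLM once; later pipeline stages pick it up from Settings.
Settings.llm = OpenAI(temperature=0.2, model="gpt-4")

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
response = index.as_query_engine().query("Who is Paul Graham?")
print(response)
```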
@@ -49,7 +49,7 @@ We support integrations with OpenAI, Hugging Face, PaLM, and more. Check out our
 ### Using a local LLM
-LlamaIndex doesn't just supported hosted LLM APIs; you can also [run a local model such as Llama2 locally](https://replicate.com/blog/run-llama-locally).
+LlamaIndex doesn't just support hosted LLM APIs; you can also [run a local model such as Llama2 locally](https://replicate.com/blog/run-llama-locally).
 For example, if you have [Ollama](https://github.com/ollama/ollama) installed and running:
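
The hunk ends before the Ollama snippet; a minimal sketch of what it leads into, assuming the `llama-index-llms-ollama` integration is installed and an Ollama daemon is serving the `llama2` model locally:

```python
# Sketch of the local-LLM path (assumes the llama-index-llms-ollama
# integration and a running Ollama daemon with the llama2 model).
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama2", request_timeout=60.0)
print(llm.complete("Paul Graham is "))
```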
@@ -64,7 +64,7 @@ See the [custom LLM's How-To](/module_guides/models/llms/usage_custom.md) for mo
 ## Prompts
-By default LlamaIndex comes with a great set of built-in, battle-tested prompts that handle the tricky work of getting a specific LLM to correctly handle and format data. This is one of the biggest benefits of using LlamaIndex. If you want to, you can [customize the prompts](/module_guides/models/prompts.md)
+By default, LlamaIndex comes with a great set of built-in, battle-tested prompts that handle the tricky work of getting a specific LLM to correctly handle and format data. This is one of the biggest benefits of using LlamaIndex. If you want to, you can [customize the prompts](/module_guides/models/prompts.md)
 ```{toctree}
 ---
...
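
For context on the `## Prompts` hunk above, a minimal sketch of prompt customization, assuming llama_index >= 0.10's `PromptTemplate` and `update_prompts`; the template text and the `response_synthesizer:text_qa_template` key are illustrative:

```python
# Sketch of customizing a built-in prompt (assumes llama_index >= 0.10;
# template text and prompt key are illustrative).
from llama_index.core import PromptTemplate, SimpleDirectoryReader, VectorStoreIndex

index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
query_engine = index.as_query_engine()

# Custom QA template; {context_str} and {query_str} are filled in at query time.
qa_template = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context and no prior knowledge, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Swap the default text-QA prompt for the custom one.
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": qa_template}
)
```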