From a7d8a1055f34202adcf520c03a3142b589a3a46d Mon Sep 17 00:00:00 2001
From: Shorthills AI <141953346+ShorthillsAI@users.noreply.github.com>
Date: Tue, 12 Mar 2024 19:33:32 +0530
Subject: [PATCH] Fixed some grammatical mistakes (#11840)

* Fixed some grammatical mistakes (#82)

* Update discover_llamaindex.md

* Update installation.md

* Update reading.md

* Update starter_example.md

* Update starter_example_local.md

* Update v0_10_0_migration.md

* Update 2024-02-28-rag-bootcamp-vector-institute.ipynb

* Update multimodal.md

* Update chatbots.md

* Fixed some minor documentation errors (#83)

* Update privacy.md

* Update using_llms.md
---
 docs/understanding/using_llms/privacy.md    | 4 ++--
 docs/understanding/using_llms/using_llms.md | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/understanding/using_llms/privacy.md b/docs/understanding/using_llms/privacy.md
index ad58350814..7a4d3488f0 100644
--- a/docs/understanding/using_llms/privacy.md
+++ b/docs/understanding/using_llms/privacy.md
@@ -4,8 +4,8 @@ By default, LLamaIndex sends your data to OpenAI for generating embeddings and n
 
 ## Data Privacy
 
-Regarding data privacy, when using LLamaIndex with OpenAI, the privacy details and handling of your data are subject to OpenAI's policies. And each custom service other than OpenAI have their own policies as well.
+Regarding data privacy, when using LLamaIndex with OpenAI, the privacy details and handling of your data are subject to OpenAI's policies. Each custom service other than OpenAI has its own policies as well.
 
 ## Vector stores
 
-LLamaIndex offers modules to connect with other vector stores within indexes to store embeddings. It is worth noting that each vector store has its own privacy policies and practices, and LLamaIndex does not assume responsibility for how they handle or use your data. Also by default LLamaIndex have a default option to store your embeddings locally.
+LLamaIndex offers modules to connect with other vector stores within indexes to store embeddings. It is worth noting that each vector store has its own privacy policies and practices, and LLamaIndex does not assume responsibility for how it handles or uses your data. Also, by default, LLamaIndex stores your embeddings locally.
diff --git a/docs/understanding/using_llms/using_llms.md b/docs/understanding/using_llms/using_llms.md
index ce6b1d43e3..624e5d185c 100644
--- a/docs/understanding/using_llms/using_llms.md
+++ b/docs/understanding/using_llms/using_llms.md
@@ -22,7 +22,7 @@ response = OpenAI().complete("Paul Graham is ")
 print(response)
 ```
 
-Usually you will instantiate an LLM and pass it to `Settings`, which you then pass to other stages of the pipeline, as in this example:
+Usually, you will instantiate an LLM and pass it to `Settings`, which you then pass to other stages of the pipeline, as in this example:
 
 ```python
 from llama_index.llms.openai import OpenAI
@@ -49,7 +49,7 @@ We support integrations with OpenAI, Hugging Face, PaLM, and more. Check out our
 
 ### Using a local LLM
 
-LlamaIndex doesn't just supported hosted LLM APIs; you can also [run a local model such as Llama2 locally](https://replicate.com/blog/run-llama-locally).
+LlamaIndex doesn't just support hosted LLM APIs; you can also [run a local model such as Llama2 locally](https://replicate.com/blog/run-llama-locally).
 
 For example, if you have [Ollama](https://github.com/ollama/ollama) installed and running:
 
@@ -64,7 +64,7 @@ See the [custom LLM's How-To](/module_guides/models/llms/usage_custom.md) for mo
 
 ## Prompts
 
-By default LlamaIndex comes with a great set of built-in, battle-tested prompts that handle the tricky work of getting a specific LLM to correctly handle and format data. This is one of the biggest benefits of using LlamaIndex. If you want to, you can [customize the prompts](/module_guides/models/prompts.md)
+By default, LlamaIndex comes with a great set of built-in, battle-tested prompts that handle the tricky work of getting a specific LLM to correctly handle and format data. This is one of the biggest benefits of using LlamaIndex. If you want to, you can [customize the prompts](/module_guides/models/prompts.md).
 
 ```{toctree}
 ---
--
GitLab