Unverified commit 660cf70a authored by Shorthills AI, committed by GitHub

Fixed some grammatical mistakes in several doc files (#10627)

parent 254d4f2b
@@ -6,7 +6,7 @@ If you like learning from videos, now's a good time to check out our "Discover L
 This is a sub-series within Discover LlamaIndex that shows you how to build a document chatbot from scratch.
-We show you how to do this in a "bottoms-up" fashion - start by using the LLMs, data objects as independent modules. Then gradually add higher-level abstractions like indexing, and advanced retrievers/rerankers.
+We show you how to do this in a "bottoms-up" fashion - start by using the LLMs, and data objects as independent modules. Then gradually add higher-level abstractions like indexing, and advanced retrievers/rerankers.
 [Full Repo](https://github.com/run-llama/llama_docs_bot)
 [[Part 1] LLMs and Prompts](https://www.youtube.com/watch?v=p0jcvGiBKSA)
@@ -25,7 +25,7 @@ This video covers the `SubQuestionQueryEngine` and how it can be applied to fina
 ## Discord Document Management
-This video covers managing documents from a source that is constantly updating (i.e Discord) and how you can avoid document duplication and save embedding tokens.
+This video covers managing documents from a source that is constantly updating (i.e. Discord) and how you can avoid document duplication and save embedding tokens.
 [Youtube](https://www.youtube.com/watch?v=j6dJcODLd_c)
@@ -52,7 +52,7 @@ pip install llama-index-core llama-index-readers-file llama-index-llms-ollama ll
 [Check out our Starter Example with Local Models](starter_example_local.md)
-A full guide to using and configuring LLMs available [here](/module_guides/models/llms.md).
+A full guide to using and configuring LLMs is available [here](/module_guides/models/llms.md).
 A full guide to using and configuring embedding models is available [here](/module_guides/models/embeddings.md).
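For readers skimming this diff, the configuration those guides describe boils down to setting a default LLM and embedding model. A minimal sketch, assuming the post-v0.10 package layout and the OpenAI integrations are installed; the model names below are illustrative, not prescribed by this commit:

```
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

# Configure the default LLM and embedding model globally; individual APIs can
# still override these locally.
Settings.llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
```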
@@ -63,7 +63,7 @@ Git clone this repository: `git clone https://github.com/jerryjliu/llama_index.g
 - [Install poetry](https://python-poetry.org/docs/#installation) - this will help you manage package dependencies
 - `poetry shell` - this command creates a virtual environment, which keeps installed packages contained to this project
 - `poetry install` - this will install the core starter package requirements
-- (Optional) `poetry install --with dev,docs` - this will install all dependencies needed for most local development
+- (Optional) `poetry install --with dev, docs` - this will install all dependencies needed for most local development
 From there, you can install integrations as needed with `pip`, For example:
@@ -14,7 +14,7 @@ Our docs are structured so you should be able to roughly progress simply by movi
 1. **Getting started**
-The section you're in right now. We can get you going from knowing nothing about LlamaIndex and LLMs. [Install the library](installation.md), write your first demo in [five lines of code](starter_example.md), learn more about the [high level concepts](concepts.md) of LLM applications and then see how you can [customize the five-line example](customization.rst) to meet your needs.
+The section you're in right now. We can get you going from knowing nothing about LlamaIndex and LLMs. [Install the library](installation.md), write your first demo in [five lines of code](starter_example.md), learn more about the [high level concepts](concepts.md) of LLM applications, and then see how you can [customize the five-line example](customization.rst) to meet your needs.
 2. **Use cases**
@@ -22,7 +22,7 @@ Our docs are structured so you should be able to roughly progress simply by movi
 3. **Understanding LlamaIndex**
-Once you've completed the Getting Started section, this is the next place to go. In a series of bite-sized tutorials we'll walk you through every stage of building a production LlamaIndex application and help you level up on the concepts of the library and LLMs in general as you go.
+Once you've completed the Getting Started section, this is the next place to go. In a series of bite-sized tutorials, we'll walk you through every stage of building a production LlamaIndex application and help you level up on the concepts of the library and LLMs in general as you go.
 4. **Optimizing**
@@ -24,7 +24,7 @@ LlamaIndex uses OpenAI's `gpt-3.5-turbo` by default. Make sure your API key is a
 export OPENAI_API_KEY=XXXXX
 ```
-and on windows it is
+and on Windows it is
 ```
 set OPENAI_API_KEY=XXXXX
@@ -111,7 +111,7 @@ else:
     storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
     index = load_index_from_storage(storage_context)
-# either way we can now query the index
+# Either way we can now query the index
 query_engine = index.as_query_engine()
 response = query_engine.query("What did the author do growing up?")
 print(response)
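For orientation, the hunk above is the tail of the starter example's persistence logic. A self-contained sketch of the full pattern it belongs to (assuming a `data/` folder of documents and the default `./storage` persist directory) would look roughly like this:

```
import os.path

from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

PERSIST_DIR = "./storage"
if not os.path.exists(PERSIST_DIR):
    # First run: load the documents, build the index, and persist it to disk.
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents)
    index.storage_context.persist(persist_dir=PERSIST_DIR)
else:
    # Subsequent runs: reload the persisted index instead of re-embedding.
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context)

# Either way we can now query the index
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```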
@@ -14,7 +14,7 @@ The easiest way to get it is to [download it via this link](https://raw.githubus
 ## Setup
-Ollama is a tool to help you get setup with LLMs locally (currently supported on OSX and Linux. You can install Ollama on Windows through WSL 2).
+Ollama is a tool to help you get set up with LLMs locally (currently supported on OSX and Linux. You can install Ollama on Windows through WSL 2).
 Follow the [README](https://github.com/jmorganca/ollama) to learn how to install it.
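Once Ollama is installed and a model has been pulled, wiring it into LlamaIndex takes only a couple of lines. A hedged sketch, assuming the `llama-index-llms-ollama` integration is installed; the model name and timeout are illustrative:

```
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

# Point LlamaIndex at the locally running Ollama server; local models can be
# slow to respond, so a generous request timeout helps.
Settings.llm = Ollama(model="llama2", request_timeout=120.0)
```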
@@ -2,7 +2,7 @@
 With the introduction of LlamaIndex v0.10.0, there were several changes
-- integrations have seperate `pip install`s (See the [full registry](https://pretty-sodium-5e0.notion.site/ce81b247649a44e4b6b35dfb24af28a6?v=53b3c2ced7bb4c9996b81b83c9f01139))
+- integrations have separate `pip install`s (See the [full registry](https://pretty-sodium-5e0.notion.site/ce81b247649a44e4b6b35dfb24af28a6?v=53b3c2ced7bb4c9996b81b83c9f01139))
 - many imports changed
 - the service context was deprecated
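The "many imports changed" item refers to core modules moving under `llama_index.core` and integrations moving into their own packages. A hedged illustration; the specific classes chosen here are just examples:

```
# Before (v0.9.x):
#   from llama_index import VectorStoreIndex, SimpleDirectoryReader
#   from llama_index.llms import OpenAI

# After (v0.10.x): core modules live under llama_index.core, and each
# integration ships as its own pip package.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI  # pip install llama-index-llms-openai
```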
@@ -46,13 +46,13 @@ llamaindex-cli upgrade-file <file_path>
 llamaindex-cli upgrade <folder_path>
 ```
-For notebooks, new `pip install` statements are inserting and imports are updated.
+For notebooks, new `pip install` statements are inserted and imports are updated.
 For `.py` and `.md` files, import statements are also updated, and new requirements are printed to the terminal.
 ## Deprecated ServiceContext
-In addition to import changes, the existing `ServiceContext` has been deprecated. While it will be supported for a limited time, the preffered way of setting up the same options will be either globally in the `Settings` object or locally in the APIs that use certain modules.
+In addition to import changes, the existing `ServiceContext` has been deprecated. While it will be supported for a limited time, the preferred way of setting up the same options will be either globally in the `Settings` object or locally in the APIs that use certain modules.
 For example, before you might have had:
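The example itself is cut off in this diff, but the general shape of the migration it introduces is roughly as follows (a hedged sketch, not the docs' exact snippet; the model choices are illustrative):

```
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

# Before (deprecated):
#   from llama_index import ServiceContext
#   service_context = ServiceContext.from_defaults(llm=OpenAI(), embed_model=OpenAIEmbedding())

# After, option 1: set the options once, globally.
Settings.llm = OpenAI(model="gpt-3.5-turbo")
Settings.embed_model = OpenAIEmbedding()

# After, option 2: pass the module locally to the API that uses it.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, embed_model=OpenAIEmbedding())
query_engine = index.as_query_engine(llm=OpenAI(model="gpt-3.5-turbo"))
```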