From ae97cffe5b5b34eb7ee0f7ef240a9ce16fba835e Mon Sep 17 00:00:00 2001
From: Tacito Vito Westerberg <9747476+bartoncreek@users.noreply.github.com>
Date: Mon, 24 Jul 2023 11:44:43 -0500
Subject: [PATCH] cosmetic grammar maven cleanups to markdown document (#7021)

---
 docs/getting_started/concepts.md | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/docs/getting_started/concepts.md b/docs/getting_started/concepts.md
index 6b13862803..417c7df253 100644
--- a/docs/getting_started/concepts.md
+++ b/docs/getting_started/concepts.md
@@ -32,9 +32,7 @@ A data connector (i.e. `Reader`) ingest data from different data sources and dat
 [**Documents / Nodes**](/core_modules/data_modules/documents_and_nodes/root.md): A `Document` is a generic container around any data source - for instance, a PDF, an API output, or retrieved data from a database. A `Node` is the atomic unit of data in LlamaIndex and represents a "chunk" of a source `Document`. It's a rich representation that includes metadata and relationships (to other nodes) to enable accurate and expressive retrieval operations.
 
 [**Data Indexes**](/core_modules/data_modules/index/root.md):
-Once you've ingested your data, LlamaIndex help you index data into a format that's easy to retrieve.
-Under the hood, LlamaIndex parse the raw documents into intermediate representations, calculate vector embeddings, and infer metadata, etc.
-The most commonly used index is the [VectorStoreIndex](/core_modules/data_modules/index/vector_store_guide.ipynb)
+Once you've ingested your data, LlamaIndex will help you index the data into a format that's easy to retrieve. Under the hood, LlamaIndex parses the raw documents into intermediate representations, calculates vector embeddings, and infers metadata. The most commonly used index is the [VectorStoreIndex](/core_modules/data_modules/index/vector_store_guide.ipynb)
 
 ### Querying Stage
 In the querying stage, the RAG pipeline retrieves the most relevant context given a user query,
@@ -51,7 +49,7 @@ These building blocks can be customized to reflect ranking preferences, as well
 #### Building Blocks
 [**Retrievers**](/core_modules/query_modules/retriever/root.md):
 A retriever defines how to efficiently retrieve relevant context from a knowledge base (i.e. index) when given a query.
-The specific retrieval logic differs for difference indices, the most popular being dense retrieval against a vector index.
+The specific retrieval logic differs for different indices, the most popular being dense retrieval against a vector index.
 
 [**Node Postprocessors**](/core_modules/query_modules/node_postprocessors/root.md):
@@ -80,4 +78,4 @@ This gives it additional flexibility to tackle more complex tasks.
 * tell me how to [customize things](/getting_started/customization.rst).
 * curious about a specific module? Check out the module guides 👈
 * have a use case in mind? Check out the [end-to-end tutorials](/end_to_end_tutorials/use_cases.md)
-```
\ No newline at end of file
+```
-- 
GitLab
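
As context for the loading → indexing → querying flow that the patched section describes, here is a minimal sketch using the `llama_index` package as it stood around this release. The `./data` directory, the query string, and the `similarity_top_k` value are placeholder assumptions, not part of the docs change, and the example assumes default OpenAI-backed embeddings and LLM settings.

```python
# Minimal sketch of the ingest -> index -> query flow from concepts.md.
# Assumes `llama_index` (~0.7.x, mid-2023) and an OPENAI_API_KEY in the environment.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Loading stage: a data connector (Reader) ingests source files into Documents.
documents = SimpleDirectoryReader("./data").load_data()  # "./data" is a placeholder path

# Indexing stage: Documents are parsed into Nodes, embeddings are computed,
# and a vector index is built over them.
index = VectorStoreIndex.from_documents(documents)

# Querying stage: the query engine bundles a retriever, node postprocessors,
# and a response synthesizer behind a single high-level interface.
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("What does the ingested data say about X?")
print(response)
```

The sketch stays at the high-level `as_query_engine()` API on purpose; the retriever, node postprocessors, and response synthesizer described under the querying stage can each be constructed and customized individually when finer control is needed.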