diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index ceabd9053dd899de3d7ede9cb3c41f361c984670..8dbca4ad914f2498e3938038fac6b33f8a615fe4 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -18,7 +18,7 @@ Also, join our Discord for ideas and discussions: <https://discord.gg/dGcwcsnxhU
 
 ### 1. 🆕 Extend Core Modules
 
-The most impactful way to contribute to LlamaIndex is extending our core modules:
+The most impactful way to contribute to LlamaIndex is by extending our core modules:
 ![LlamaIndex modules](https://github.com/jerryjliu/llama_index/raw/main/docs/_static/contribution/contrib.png)
 
 We welcome contributions in _all_ modules shown above.
@@ -52,7 +52,7 @@ A data loader ingests data of any format from anywhere into `Document` objects,
 - [Github Repository Loader](https://github.com/emptycrown/llama-hub/tree/main/llama_hub/github_repo)
 
 Contributing a data loader is easy and super impactful for the community.
-The preferred way to contribute is making a PR at [LlamaHub Github](https://github.com/emptycrown/llama-hub).
+The preferred way to contribute is by making a PR at [LlamaHub Github](https://github.com/emptycrown/llama-hub).
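+
+As a point of reference, here is a minimal, hypothetical sketch of what a loader looks like (the class name and file-reading logic are illustrative, not a real LlamaHub loader):
+
+```python
+from typing import List
+
+from llama_index.readers.base import BaseReader
+from llama_index.schema import Document
+
+
+class LocalTextFileReader(BaseReader):
+    """Toy loader: read a plain-text file into a single `Document`."""
+
+    def load_data(self, file_path: str) -> List[Document]:
+        with open(file_path, encoding="utf-8") as f:
+            text = f.read()
+        # Keep the source path in metadata so downstream modules can cite it.
+        return [Document(text=text, metadata={"file_path": file_path})]
+```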
 
 **Ideas**
 
@@ -62,8 +62,8 @@ The preferred way to contribute is making a PR at [LlamaHub Github](https://gith
 
 #### Node Parser
 
-A node parser parses `Document` objects into `Node` objects (atomic unit of data that LlamaIndex operates over, e.g., chunk of text, image, or table).
-It is responsible for splitting text (via text splitters) and explicitly modelling the relationship between units of data (e.g. A is the source of B, C is a chunk after D).
+A node parser parses `Document` objects into `Node` objects (atomic units of data that LlamaIndex operates over, e.g., chunk of text, image, or table).
+It is responsible for splitting text (via text splitters) and explicitly modeling the relationship between units of data (e.g. A is the source of B, C is a chunk after D).
 
 **Interface**: `get_nodes_from_documents` takes a sequence of `Document` objects as input, and outputs a sequence of `Node` objects.
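+
+For illustration, a bare-bones parser satisfying this interface might look like the following (a hedged sketch: the class name and the paragraph-splitting heuristic are made up for the example):
+
+```python
+from typing import List, Sequence
+
+from llama_index.schema import Document, TextNode
+
+
+class ParagraphNodeParser:
+    """Toy parser: one `TextNode` per paragraph of each document."""
+
+    def get_nodes_from_documents(self, documents: Sequence[Document]) -> List[TextNode]:
+        nodes: List[TextNode] = []
+        for doc in documents:
+            for chunk in doc.text.split("\n\n"):
+                if chunk.strip():
+                    # Record provenance so relationships back to the source
+                    # document can be modeled explicitly.
+                    nodes.append(TextNode(text=chunk, metadata={"doc_id": doc.doc_id}))
+        return nodes
+```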
 
@@ -75,7 +75,7 @@ See [the API reference](https://docs.llamaindex.ai/en/latest/api_reference/index
 
 **Ideas**:
 
-- Add new `Node` relationships to model to model hierarchical documents (e.g. play-act-scene, chapter-section-heading).
+- Add new `Node` relationships to model hierarchical documents (e.g. play-act-scene, chapter-section-heading).
 
 ---
 
@@ -122,7 +122,7 @@ These serve as the main data store and retrieval engine for our vector index.
 
 **Interface**:
 
-- `add` takes in a sequence of `NodeWithEmbeddings` and insert the embeddings (and possibly the node contents & metadata) into the vector store.
+- `add` takes in a sequence of `NodeWithEmbeddings` and inserts the embeddings (and possibly the node contents & metadata) into the vector store.
 - `delete` removes entries given document IDs.
 - `query` retrieves top-k most similar entries given a query embedding.
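+
+To make the contract concrete, here is a deliberately tiny in-memory sketch of the three methods above (the types are simplified stand-ins; a real integration should subclass the vector store base class and follow the API reference):
+
+```python
+from typing import Any, Dict, List, Tuple
+
+
+class InMemoryVectorStore:
+    """Toy store illustrating the add/delete/query interface."""
+
+    def __init__(self) -> None:
+        self._rows: Dict[str, Tuple[List[float], Any]] = {}  # id -> (embedding, node)
+
+    def add(self, nodes_with_embeddings: List[Any]) -> List[str]:
+        # Each entry is assumed to expose `.id`, `.embedding`, and `.node`.
+        for item in nodes_with_embeddings:
+            self._rows[item.id] = (item.embedding, item.node)
+        return [item.id for item in nodes_with_embeddings]
+
+    def delete(self, doc_id: str) -> None:
+        self._rows.pop(doc_id, None)
+
+    def query(self, query_embedding: List[float], top_k: int = 2) -> List[Any]:
+        def dot(a: List[float], b: List[float]) -> float:
+            return sum(x * y for x, y in zip(a, b))
+
+        ranked = sorted(
+            self._rows.values(),
+            key=lambda row: dot(row[0], query_embedding),
+            reverse=True,
+        )
+        return [node for _, node in ranked[:top_k]]
+```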
 
@@ -145,7 +145,7 @@ See [reference](https://docs.llamaindex.ai/en/stable/api_reference/indices/vecto
 
 Our retriever classes are lightweight classes that implement a `retrieve` method.
 They may take in an index class as input - by default, each of our indices
-(list, vector, keyword) have an associated retriever. The output is a set of
+(list, vector, keyword) has an associated retriever. The output is a set of
 `NodeWithScore` objects (a `Node` object with an extra `score` field).
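+
+As a rough illustration of the shape of this interface (with plain tuples standing in for `NodeWithScore`):
+
+```python
+from typing import List, Tuple
+
+
+class KeywordOverlapRetriever:
+    """Toy retriever: score stored text chunks by keyword overlap with the query."""
+
+    def __init__(self, chunks: List[str]) -> None:
+        self._chunks = chunks
+
+    def retrieve(self, query: str) -> List[Tuple[str, int]]:
+        terms = set(query.lower().split())
+        scored = [(c, len(terms & set(c.lower().split()))) for c in self._chunks]
+        # Highest-overlap chunks first, mimicking a (node, score) result set.
+        return sorted(scored, key=lambda pair: pair[1], reverse=True)
+```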
 
 You may also choose to implement your own retriever classes on top of your own
@@ -174,7 +174,7 @@ Our query engine classes are lightweight classes that implement a `query` method
 For instance, they may take in a retriever class as input; our `RetrieverQueryEngine`
 takes in a `retriever` as input as well as a `BaseSynthesizer` class for response synthesis, and
 the `query` method performs retrieval and synthesis before returning the final result.
-They may take in other query engine classes in as input too.
+They may take in other query engine classes as input too.
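+
+Schematically, the retrieve-then-synthesize flow looks like this (a sketch; the synthesizer's `synthesize` call is assumed to accept a query string and the retrieved nodes):
+
+```python
+class SimpleQueryEngine:
+    """Toy engine wiring a retriever to a response synthesizer."""
+
+    def __init__(self, retriever, synthesizer) -> None:
+        self._retriever = retriever
+        self._synthesizer = synthesizer
+
+    def query(self, query_str: str):
+        # Retrieval first, then response synthesis over the retrieved nodes.
+        nodes = self._retriever.retrieve(query_str)
+        return self._synthesizer.synthesize(query_str, nodes)
+```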
 
 **Interface**:
 
@@ -217,7 +217,7 @@ A token usage optimizer refines the retrieved `Nodes` to reduce token usage duri
 
 #### Node Postprocessors
 
-A node postprocessor refines a list of retrieve nodes given configuration and context.
+A node postprocessor refines a list of retrieved nodes given configuration and context.
 
 **Interface**: `postprocess_nodes` takes a list of `Nodes` and extra metadata (e.g. similarity and query), and outputs a refined list of `Nodes`.
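+
+For example, a minimal postprocessor that filters on similarity might look like this (illustrative only; each node is assumed to carry a `score` field, as `NodeWithScore` does):
+
+```python
+from typing import Any, List
+
+
+class SimilarityCutoffPostprocessor:
+    """Toy postprocessor: drop retrieved nodes below a similarity threshold."""
+
+    def __init__(self, cutoff: float = 0.7) -> None:
+        self._cutoff = cutoff
+
+    def postprocess_nodes(self, nodes: List[Any], **metadata: Any) -> List[Any]:
+        # Keep only nodes whose similarity score clears the configured cutoff.
+        return [n for n in nodes if (n.score or 0.0) >= self._cutoff]
+```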
 
@@ -231,7 +231,7 @@ A node postprocessor refines a list of retrieve nodes given configuration and co
 
 #### Output Parsers
 
-A output parser enables us to extract structured output from the plain text output generated by the LLM.
+An output parser enables us to extract structured output from the plain text output generated by the LLM.
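+
+A hypothetical example of the idea (the class and its JSON-extraction heuristic are illustrative; the actual base-class interface is listed below):
+
+```python
+import json
+from typing import Any
+
+
+class JSONOutputParser:
+    """Toy parser: pull the first JSON object out of a raw LLM reply."""
+
+    def parse(self, output: str) -> Any:
+        start, end = output.find("{"), output.rfind("}")
+        if start == -1 or end == -1:
+            raise ValueError("no JSON object found in LLM output")
+        return json.loads(output[start : end + 1])
+```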
 
 **Interface**:
 
diff --git a/docs/DOCS_README.md b/docs/DOCS_README.md
index 3e9faa1e102968ff4be9a6ccbd0307165ffad31a..749229369ff8cd5a58b9149befa57d4cf0ddb632 100644
--- a/docs/DOCS_README.md
+++ b/docs/DOCS_README.md
@@ -6,7 +6,7 @@ The `docs` directory contains the sphinx source text for LlamaIndex docs, visit
 https://docs.llamaindex.ai/en/stable/ to read the full documentation.
 
 This guide is made for anyone who's interested in running LlamaIndex documentation locally,
-making changes to it and make contributions. LlamaIndex is made by the thriving community
+making changes to it, and making contributions. LlamaIndex is made by the thriving community
 behind it, and you're always welcome to make contributions to the project and the
 documentation.
 
diff --git a/docs/community/llama_packs/root.md b/docs/community/llama_packs/root.md
index 8c7ad1e1c4e51de0f218c78e9c9a0dac0ce8c9b5..fa59d116b2a2438711bd7c5dffd1cef317ee0a8f 100644
--- a/docs/community/llama_packs/root.md
+++ b/docs/community/llama_packs/root.md
@@ -8,8 +8,8 @@ This directly tackles a big pain point in building LLM apps; every use case requ
 
 They can be used in two ways:
 
-- On one hand, they are **prepackaged modules** that can be initialized with parameters and run out of the box to achieve a given use case (whether that’s a full RAG pipeline, application template, and more). You can also import submodules (e.g. LLMs, query engines) to use directly.
-- On another hand, LlamaPacks are **templates** that you can inspect, modify, and use.
+- On one hand, they are **prepackaged modules** that can be initialized with parameters and run out of the box to achieve a given use case (whether that’s a full RAG pipeline, an application template, or something else). You can also import submodules (e.g. LLMs, query engines) to use directly.
+- On the other hand, LlamaPacks are **templates** that you can inspect, modify, and use.
 
 **All packs are found on [LlamaHub](https://llamahub.ai/).** Go to the dropdown menu and select "LlamaPacks" to filter by packs.
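+
+For instance, the first mode typically looks like this (the pack name is a placeholder; substitute any pack listed on LlamaHub):
+
+```python
+from llama_index.llama_pack import download_llama_pack
+
+# Downloads the pack's source into ./pack, so it can also be inspected and
+# modified as a template (the second mode).
+SomeExamplePack = download_llama_pack("<PackName>", "./pack")
+pack = SomeExamplePack()  # constructor parameters vary per pack
+```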
 
diff --git a/docs/understanding/storing/storing.md b/docs/understanding/storing/storing.md
index e344c7929b5fc7a97887ae0782abaf24942033f1..862ee180b242537afac462dcbb5bf1b97f98c8ec 100644
--- a/docs/understanding/storing/storing.md
+++ b/docs/understanding/storing/storing.md
@@ -10,7 +10,7 @@ The simplest way to store your indexed data is to use the built-in `.persist()`
 index.storage_context.persist(persist_dir="<persist_dir>")
 ```
 
-Here is an example for Composable Graph:
+Here is an example of persisting a Composable Graph:
 
 ```python
 graph.root_index.storage_context.persist(persist_dir="<persist_dir>")
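+
+# To reload later (a sketch, assuming the same persist_dir and a known
+# root index id for the graph):
+from llama_index import StorageContext, load_graph_from_storage
+
+storage_context = StorageContext.from_defaults(persist_dir="<persist_dir>")
+graph = load_graph_from_storage(storage_context, root_id="<root_id>")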
diff --git a/docs/use_cases/q_and_a.md b/docs/use_cases/q_and_a.md
index c2fa223d60497b12828d5faf198f4421fddc56ac..b5a69e3b43511ce06b232a9cdcd0b3da4b524a5c 100644
--- a/docs/use_cases/q_and_a.md
+++ b/docs/use_cases/q_and_a.md
@@ -33,4 +33,4 @@ Q&A has all sorts of sub-types, such as:
 
 ## Further examples
 
-For further examples of Q&A use cases, see our [Q&A section in Putting it All Together](/understanding/putting_it_all_together/q_and_a.html).
+For further examples of Q&A use cases, see our [Q&A section in Putting it All Together](/understanding/putting_it_all_together/q_and_a.md).
diff --git a/experimental/classifier/utils.py b/experimental/classifier/utils.py
index 877dfd7adb34c37871cfc33857d190c29e5ca9cb..abf808c8201cd6305195ffbfaf1a2ed42a6aec37 100644
--- a/experimental/classifier/utils.py
+++ b/experimental/classifier/utils.py
@@ -68,7 +68,7 @@ def extract_float_given_response(response: str, n: int = 1) -> Optional[float]:
         if new_numbers is None:
             return None
         else:
-            return float(numbers[0])
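+            # Use the freshly re-matched numbers, not the stale `numbers` list.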
+            return float(new_numbers[0])
     else:
         return float(numbers[0])