diff --git a/apps/docs/docs/modules/high_level/query_engine.md b/apps/docs/docs/modules/high_level/query_engine.md
index 20dbe35b2063163d3cbcbd8e693d5653051cf287..c6d6452a199bb73801928ad4bf82879bbd9c020e 100644
--- a/apps/docs/docs/modules/high_level/query_engine.md
+++ b/apps/docs/docs/modules/high_level/query_engine.md
@@ -8,7 +8,15 @@ A query engine wraps a `Retriever` and a `ResponseSynthesizer` into a pipeline,
 
 ```typescript
 const queryEngine = index.asQueryEngine();
-const response = queryEngine.query("query string");
+const response = await queryEngine.query("query string");
 ```
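+
+The query call is asynchronous, so the response must be awaited. The resulting
+`Response` can then be printed directly; a minimal sketch:
+
+```typescript
+// `toString()` returns the synthesized answer text.
+console.log(response.toString());
+```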
 
 ## Sub Question Query Engine
diff --git a/apps/docs/docs/modules/low_level/embedding.md b/apps/docs/docs/modules/low_level/embedding.md
index 645f33459cc5ac0a8487f428d60ea89396dacb7e..57f672abef2a36cb3bdf49037a5f4f1b76d9d5f7 100644
--- a/apps/docs/docs/modules/low_level/embedding.md
+++ b/apps/docs/docs/modules/low_level/embedding.md
@@ -9,11 +9,20 @@ The embedding model in LlamaIndex is responsible for creating numerical represen
 This can be explicitly set in the `ServiceContext` object.
 
 ```typescript
-import { OpenAIEmbedding, ServiceContext } from "llamaindex";
+import { OpenAIEmbedding, serviceContextFromDefaults } from "llamaindex";
 
 const openaiEmbeds = new OpenAIEmbedding();
 
-const serviceContext = new ServiceContext({ embedModel: openaiEmbeds });
+const serviceContext = serviceContextFromDefaults({ embedModel: openaiEmbeds });
 ```
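+
+The `ServiceContext` is then passed in when building an index; a minimal sketch,
+assuming `documents` has already been loaded with a reader:
+
+```typescript
+import { VectorStoreIndex } from "llamaindex";
+
+const index = await VectorStoreIndex.fromDocuments(documents, { serviceContext });
+```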
 
 ## API Reference
diff --git a/apps/docs/docs/modules/low_level/llm.md b/apps/docs/docs/modules/low_level/llm.md
index 29e8a8879104cff9a7bd57d4429dcfd59175f6dd..4f0ba2db5a49e523b317b9aea7d010381309d930 100644
--- a/apps/docs/docs/modules/low_level/llm.md
+++ b/apps/docs/docs/modules/low_level/llm.md
@@ -9,14 +9,25 @@ The LLM is responsible for reading text and generating natural language response
 The LLM can be explicitly set in the `ServiceContext` object.
 
 ```typescript
-import { ChatGPTLLMPredictor, ServiceContext } from "llamaindex";
+import { OpenAI, serviceContextFromDefaults } from "llamaindex";
 
-const openaiLLM = new ChatGPTLLMPredictor({ model: "gpt-3.5-turbo" });
+const openaiLLM = new OpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
 
-const serviceContext = new ServiceContext({ llmPredictor: openaiLLM });
+const serviceContext = serviceContextFromDefaults({ llm: openaiLLM });
 ```
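+
+Anything built with this `ServiceContext` now uses the configured LLM; a minimal
+end-to-end sketch, assuming `documents` has already been loaded:
+
+```typescript
+import { VectorStoreIndex } from "llamaindex";
+
+const index = await VectorStoreIndex.fromDocuments(documents, { serviceContext });
+const queryEngine = index.asQueryEngine();
+const response = await queryEngine.query("query string");
+```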
 
 ## API Reference
 
-- [ChatGPTLLMPredictor](../../api/classes/ChatGPTLLMPredictor.md)
+- [OpenAI](../../api/classes/OpenAI.md)
 - [ServiceContext](../../api/interfaces/ServiceContext.md)
\ No newline at end of file