Unverified commit 1eeff297, authored by ChrisDEV, committed by GitHub

Add streaming support for DenseXRetrievalPack (#12607)

parent 2254bb5d
@@ -41,6 +41,9 @@ documents = SimpleDirectoryReader("./data").load_data()
 # uses the LLM to extract propositions from every document/node!
 dense_pack = DenseXRetrievalPack(documents)
+
+# for streaming
+dense_pack = DenseXRetrievalPack(documents, streaming=True)
 ```
The `run()` function is a light wrapper around `query_engine.query()`.
@@ -51,4 +54,14 @@ response = dense_pack.run("What can you tell me about LLMs?")
 print(response)
 ```
+
+For streaming, `run()` returns a streaming response instead of a plain string:
+
+```python
+stream_response = dense_pack.run("What can you tell me about LLMs?")
+stream_response.print_response_stream()
+```
 
 See the [notebook on llama-hub](https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/dense_x_retrieval/dense_x_retrieval.ipynb) for a full example.
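The streaming variant hands back an object you drain token by token rather than a finished string. As a rough illustration of that pattern, here is a dependency-free sketch with a hypothetical `MockStreamingResponse` class and a fake token generator standing in for the real LlamaIndex objects (names and structure are assumptions for illustration, not the library's actual implementation):

```python
from typing import Iterator


class MockStreamingResponse:
    """Illustrative stand-in for the streaming response returned when
    streaming=True (NOT the real LlamaIndex class)."""

    def __init__(self, token_gen: Iterator[str]) -> None:
        self._token_gen = token_gen

    def get_response(self) -> str:
        # Drain the token generator and return the full text at once.
        return "".join(self._token_gen)

    def print_response_stream(self) -> None:
        # Print each token as it arrives, instead of waiting for the
        # complete answer.
        for token in self._token_gen:
            print(token, end="", flush=True)
        print()


def fake_llm_tokens() -> Iterator[str]:
    # Stand-in for an LLM that yields its answer incrementally.
    yield from ["LLMs ", "are ", "large ", "language ", "models."]


stream_response = MockStreamingResponse(fake_llm_tokens())
stream_response.print_response_stream()
```

The point of the pattern is that tokens are consumed as they are produced, so the first words can be shown to the user before generation finishes.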
@@ -75,6 +75,7 @@ class DenseXRetrievalPack(BaseLlamaPack):
         embed_model: Optional[BaseEmbedding] = None,
         text_splitter: TextSplitter = SentenceSplitter(),
         similarity_top_k: int = 4,
+        streaming: bool = False,
     ) -> None:
         """Init params."""
         self._proposition_llm = proposition_llm or OpenAI(
@@ -112,7 +113,9 @@ class DenseXRetrievalPack(BaseLlamaPack):
         )
         self.query_engine = RetrieverQueryEngine.from_args(
-            self.retriever, service_context=service_context
+            self.retriever,
+            service_context=service_context,
+            streaming=streaming,
         )

     async def _aget_proposition(self, node: TextNode) -> List[TextNode]:
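For context, `_aget_proposition` is the async hook where the pack asks the LLM to break each node into atomic propositions before indexing. A naive, dependency-free stand-in that splits on sentence boundaries can illustrate the shape of that interface; the `TextNode` dataclass and the sentence-split heuristic below are illustrative assumptions only (the real pack sends the node text to an LLM with a proposition-extraction prompt):

```python
import asyncio
import re
from dataclasses import dataclass
from typing import List


@dataclass
class TextNode:
    """Minimal stand-in for llama_index's TextNode (illustrative only)."""
    text: str


async def aget_proposition(node: TextNode) -> List[TextNode]:
    # The real pack prompts an LLM to rewrite node.text as a list of
    # self-contained propositions; a naive sentence split stands in
    # for that call here.
    sentences = re.split(r"(?<=[.!?])\s+", node.text.strip())
    return [TextNode(text=s) for s in sentences if s]


node = TextNode("LLMs are neural networks. They predict the next token.")
propositions = asyncio.run(aget_proposition(node))
for p in propositions:
    print(p.text)
```

Each resulting proposition is embedded and retrieved independently, which is what gives the dense-X approach its finer-grained retrieval units.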
@@ -28,7 +28,7 @@ license = "MIT"
 maintainers = ["logan-markewich"]
 name = "llama-index-packs-dense-x-retrieval"
 readme = "README.md"
-version = "0.1.3"
+version = "0.1.4"

 [tool.poetry.dependencies]
 python = ">=3.8.1,<4.0"