Commit 161801a2 authored by Logan, committed by GitHub
Logan/readme update for genai (#18059)

parent 48e5010d
# LlamaIndex Llms Integration: Google GenAI
## Installation
1. Install the required Python packages:
```bash
!pip install -q llama-index google-genai
%pip install llama-index-llms-google-genai
```
2. Set the Google API key as an environment variable:
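Step 2 can be sketched as follows; `GOOGLE_API_KEY` is assumed to be the variable name the SDK reads, and the key value shown is a placeholder:

```python
import os

# The google-genai SDK is assumed to read the key from GOOGLE_API_KEY;
# replace the placeholder with your actual API key.
os.environ["GOOGLE_API_KEY"] = "your-api-key-here"
```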
To generate a poem using the Gemini model, use the following code:
```python
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="gemini-2.0-flash")
resp = llm.complete("Write a poem about a magic backpack")
print(resp)
```
To simulate a conversation, send a list of messages:
```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.google_genai import GoogleGenAI
messages = [
    ChatMessage(role="user", content="Hello friend!"),
    ChatMessage(role="assistant", content="Yarr what is shakin' matey?"),
    ChatMessage(
        role="user", content="Help me decide what to have for dinner."
    ),
]
llm = GoogleGenAI(model="gemini-2.0-flash")
resp = llm.chat(messages)
print(resp)
```
To stream content responses in real-time:
```python
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="gemini-2.0-flash")
resp = llm.stream_complete(
"The story of Sourcrust, the bread creature, is really interesting. It all started when..."
)

for r in resp:
    print(r.delta, end="")
```
To stream chat responses:
```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="gemini-2.0-flash")
messages = [
    ChatMessage(role="user", content="Hello friend!"),
    ChatMessage(role="assistant", content="Yarr what is shakin' matey?"),
    ChatMessage(
        role="user", content="Help me decide what to have for dinner."
    ),
]

resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```
To use a specific model, you can configure it like this:
```python
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="models/gemini-pro")
resp = llm.complete("Write a short, but joyous, ode to LlamaIndex")
print(resp)
```
To use the asynchronous completion API:
```python
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="models/gemini-pro")
resp = await llm.acomplete("Llamas are famous for ")
print(resp)
```
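Note that top-level `await` works in a notebook or other running event loop; in a plain script you would drive the coroutine with `asyncio.run`. The sketch below uses a stand-in coroutine (`fake_acomplete`, a hypothetical placeholder) rather than a live `GoogleGenAI` client, since a real call needs an API key and network access:

```python
import asyncio


# Stand-in for llm.acomplete: a real call needs a GOOGLE_API_KEY and
# network access, so this placeholder just mimics the awaitable shape.
async def fake_acomplete(prompt: str) -> str:
    await asyncio.sleep(0)  # yield control, as a real network call would
    return f"completion for: {prompt!r}"


# In a script (no running event loop), asyncio.run drives the coroutine.
resp = asyncio.run(fake_acomplete("Llamas are famous for "))
print(resp)
```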
pyproject.toml:

```toml
exclude = ["**/BUILD"]
license = "MIT"
name = "llama-index-llms-google-genai"
readme = "README.md"
version = "0.1.0.post1"

[tool.poetry.dependencies]
python = ">=3.9,<4.0"
```