Unverified Commit c30b5ce8 authored by Dmitrii Khizbullin's avatar Dmitrii Khizbullin Committed by GitHub

Update notebooks to the public repo (#9)

parent 26d76bdd
@@ -45,7 +45,7 @@ At a granular level, GPTSwarm is a library that includes the following component
**Clone the repo**
```bash
-git clone --recurse-submodules https://github.com/mczhuge/GPTSwarm.git
+git clone https://github.com/metauto-ai/GPTSwarm.git
cd GPTSwarm/
```
%% Cell type:markdown id: tags:
# Creating, registering and running a custom agent in GPTSwarm
%% Cell type:code id: tags:
```
from google.colab import userdata
import os
os.environ['GITHUB_TOKEN'] = userdata.get('GITHUB_TOKEN')
os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')
```
%% Cell type:code id: tags:
```
-!git clone https://$GITHUB_TOKEN@github.com/mczhuge/GPTSwarm.git
+!git clone https://github.com/metauto-ai/GPTSwarm.git
```
%% Cell type:code id: tags:
```
%cd GPTSwarm
```
%% Cell type:code id: tags:
```
# Blank out the macOS-specific requirements file so it is not picked up on Colab
!rm requirements_py310_macos.txt
!touch requirements_py310_macos.txt
```
%% Cell type:code id: tags:
```
!pip install -r requirements_colab.txt
```
%% Cell type:code id: tags:
```
!pip install -e .
```
%% Cell type:markdown id: tags:
We create a custom operation that subclasses the base class Node. In this example, the operation performs a single step of a chain-of-thought (CoT).
%% Cell type:code id: tags:
```
from typing import List, Any, Optional

from swarm.llm.format import Message
from swarm.graph import Node
from swarm.environment.prompt.prompt_set_registry import PromptSetRegistry
from swarm.llm import LLMRegistry


class CoTStep(Node):
    def __init__(self,
                 domain: str,
                 model_name: Optional[str],
                 is_last_step: bool,
                 operation_description: str = "Make one step of CoT",
                 id=None):
        super().__init__(operation_description, id, True)
        self.domain = domain
        self.model_name = model_name
        self.is_last_step = is_last_step
        self.llm = LLMRegistry.get(model_name)
        self.prompt_set = PromptSetRegistry.get(domain)
        self.role = self.prompt_set.get_role()
        self.constraint = self.prompt_set.get_constraint()

    @property
    def node_name(self):
        return self.__class__.__name__

    async def _execute(self, inputs: List[Any] = [], **kwargs):
        node_inputs = self.process_input(inputs)
        outputs = []
        for input_dict in node_inputs:
            role = self.prompt_set.get_role()
            constraint = self.prompt_set.get_constraint()
            if self.is_last_step:
                system_prompt = (
                    f"You are {role}. {constraint}. "
                    "Answer taking into consideration the provided sequence "
                    "of thoughts on the question at hand.")
            else:
                system_prompt = (
                    f"You are {role}. "
                    "Given the question, solve it step by step. "
                    "Answer your thoughts about the next step of the solution given "
                    "everything that has been provided to you so far. "
                    "Expand on the next step. "
                    "Do not try to provide the answer straight away, instead expand "
                    "on your thoughts about the next step of the solution. "
                    "Answer in at most 30 words. "
                    "Do not expect additional input. Make best use of whatever "
                    "knowledge you have been already provided.")
            if 'output' in input_dict:
                task = input_dict['output']
            else:
                task = input_dict["task"]
            user_prompt = self.prompt_set.get_answer_prompt(question=task)
            message = [
                Message(role="system", content=system_prompt),
                Message(role="user", content=user_prompt)]
            response = await self.llm.agen(message, max_tokens=50)
            if self.is_last_step:
                concatenated_response = response
            else:
                concatenated_response = f"{task}. Here is the next thought. {response}. "
            execution = {
                "operation": self.node_name,
                "task": task,
                "files": input_dict.get("files", []),
                "input": task,
                "role": role,
                "constraint": constraint,
                "prompt": user_prompt,
                "output": concatenated_response,
                "ground_truth": input_dict.get("GT", []),
                "format": "natural language"
            }
            outputs.append(execution)
            self.memory.add(self.id, execution)
        return outputs
```
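%% Cell type:markdown id: tags:
The system prompt built in `_execute` branches on `is_last_step`: intermediate steps are asked to expand on the next thought, while the final step produces the answer. The branching can be sketched in isolation; the helper `build_system_prompt` below is illustrative only and not part of the GPTSwarm API.
%% Cell type:code id: tags:
```
def build_system_prompt(role: str, constraint: str, is_last_step: bool) -> str:
    # Illustrative stand-in that mirrors CoTStep's prompt branching.
    if is_last_step:
        return (f"You are {role}. {constraint}. "
                "Answer taking into consideration the provided sequence "
                "of thoughts on the question at hand.")
    return (f"You are {role}. "
            "Given the question, solve it step by step. "
            "Expand on the next step of the solution, in at most 30 words, "
            "without providing the final answer straight away.")


print(build_system_prompt("a researcher", "Be factual", True))
```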
%% Cell type:markdown id: tags:
Then we create a custom Chain-of-Thought agent and register it as CustomCOT in the agent registry.
%% Cell type:code id: tags:
```
from swarm.graph import Graph
from swarm.environment.operations.cot_step import CoTStep
from swarm.environment.agents.agent_registry import AgentRegistry


@AgentRegistry.register('CustomCOT')
class CustomCOT(Graph):
    def build_graph(self):
        num_thoughts = 3
        assert num_thoughts >= 2
        thoughts = []
        for i_thought in range(num_thoughts):
            thought = CoTStep(self.domain,
                              self.model_name,
                              is_last_step=(i_thought == num_thoughts - 1))
            if i_thought > 0:
                thoughts[-1].add_successor(thought)
            thoughts.append(thought)
        self.input_nodes = [thoughts[0]]
        self.output_nodes = [thoughts[-1]]
        for thought in thoughts:
            self.add_node(thought)
```
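%% Cell type:markdown id: tags:
`build_graph` wires the steps into a linear chain: each node is made a successor of the previous one, the first node is the graph input, and the last is the output. A minimal self-contained sketch of that wiring pattern, using a toy `ToyNode` class as a stand-in for `swarm.graph.Node`:
%% Cell type:code id: tags:
```
class ToyNode:
    """Minimal stand-in for swarm.graph.Node (illustration only)."""
    def __init__(self, name):
        self.name = name
        self.successors = []

    def add_successor(self, node):
        self.successors.append(node)


def build_chain(num_thoughts):
    """Wire nodes into a linear chain, mirroring CustomCOT.build_graph."""
    assert num_thoughts >= 2
    nodes = []
    for i in range(num_thoughts):
        node = ToyNode(f"step-{i}")
        if i > 0:
            nodes[-1].add_successor(node)  # link the previous step to this one
        nodes.append(node)
    return nodes


chain = build_chain(3)
print(" -> ".join(n.name for n in chain))  # step-0 -> step-1 -> step-2
```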
%% Cell type:markdown id: tags:
And finally let's create a Swarm with a couple of our custom agents:
%% Cell type:code id: tags:
```
from swarm.graph.swarm import Swarm
swarm = Swarm(["CustomCOT", "CustomCOT"], "gaia")
task = "What is the text representation of the last digit of twelve squared?"
inputs = {"task": task}
answer = await swarm.arun(inputs)
answer
```
%% Output
2024-02-18 14:25:03.364 | INFO  | swarm.graph.node:log:160 - Memory Records for ID 6HRq:
operation: FinalDecision
files: []
subtask: What is the text representation of the last digit of twelve squared?. Here is the next thought. Calculate twelve squared (12^2), then identify the last digit of the result and convert it to its text representation.. . Here is the next thought. Next step: Compute 12^2 = 144. The last digit is 4. Convert this to its text representation: "four"..
Reference information for CoTStep:
----------------------------------------------
FINAL ANSWER: four
FINAL ANSWER: four
----------------------------------------------
Provide a specific answer. For questions with known answers, ensure to provide accurate and factual responses. Avoid vague responses or statements like 'unable to...' that don't contribute to a definitive answer. For example: if a question asks 'who will be the president of America', and the answer is currently unknown, you could suggest possibilities like 'Donald Trump', or 'Biden'. However, if the answer is known, provide the correct information.
output: FINAL ANSWER: four
['FINAL ANSWER: four']
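%% Cell type:markdown id: tags:
As a sanity check on the swarm's answer, the arithmetic in the task can be verified directly:
%% Cell type:code id: tags:
```
# Independent check of the task: 12 squared is 144; its last digit is 4 -> "four".
digit_words = ["zero", "one", "two", "three", "four",
               "five", "six", "seven", "eight", "nine"]
last_digit = (12 ** 2) % 10
print(digit_words[last_digit])  # four
```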
%% Cell type:code id: tags:
```
```
%% Cell type:markdown id: tags:
# Minimal example of running GPTSwarm
%% Cell type:code id: tags:
```
from google.colab import userdata
import os
os.environ['GITHUB_TOKEN'] = userdata.get('GITHUB_TOKEN')
os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')
```
%% Cell type:code id: tags:
```
-!git clone https://$GITHUB_TOKEN@github.com/mczhuge/GPTSwarm.git
+!git clone https://github.com/metauto-ai/GPTSwarm.git
```
%% Cell type:code id: tags:
```
%cd GPTSwarm
```
%% Cell type:code id: tags:
```
!rm requirements_py310_macos.txt
!touch requirements_py310_macos.txt
```
%% Cell type:code id: tags:
```
!pip install -r requirements_colab.txt
```
%% Cell type:code id: tags:
```
!pip install -e .
```
%% Cell type:markdown id: tags:
Here we make a test run of the swarm without any OpenAI calls, using a mock LLM backend.
%% Cell type:code id: tags:
```
from swarm.graph.swarm import Swarm
swarm = Swarm(["IO", "IO", "IO"], "gaia", model_name='mock')
task = "What is the capital of Jordan?"
inputs = {"task": task}
answer = await swarm.arun(inputs)
answer
```
%% Cell type:markdown id: tags:
Here we run swarm inference with the GPT-4 backend. OPENAI_API_KEY must be set in Colab secrets before running this cell.
%% Cell type:code id: tags:
```
from swarm.graph.swarm import Swarm
swarm = Swarm(["IO", "IO", "IO"], "gaia")
task = "What is the capital of Jordan?"
inputs = {"task": task}
answer = await swarm.arun(inputs)
answer
```
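%% Cell type:markdown id: tags:
Note that `await swarm.arun(inputs)` relies on the notebook's top-level `await` support. In a plain Python script, the coroutine must be driven with `asyncio.run`. A self-contained sketch of that calling convention, using a hypothetical stand-in coroutine `arun` rather than the real `Swarm` (which needs an API key):
%% Cell type:code id: tags:
```
import asyncio


async def arun(inputs):
    """Hypothetical stand-in for Swarm.arun, to illustrate the calling convention."""
    await asyncio.sleep(0)  # placeholder for the asynchronous LLM calls
    return f"FINAL ANSWER: processed {inputs['task']!r}"


# In a script (no top-level await), wrap the coroutine in asyncio.run:
answer = asyncio.run(arun({"task": "What is the capital of Jordan?"}))
print(answer)
```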
%% Cell type:code id: tags:
```
```