Problem:
I'm building a multi-agent workflow (Ollama backend, Python, autogen) where:
- The first agent (ExtractorAgent) extracts eligibility criteria from text using a function call (extract_eligibility).
- The second agent (MathAgent) is supposed to receive the extracted labels as part of a function call (generate_math_equations) and process them.
- A PlannerAgent (a UserProxyAgent subclass) routes between them in a GroupChat with speaker_selection_method="auto".
However:
After ExtractorAgent returns its function result, MathAgent never receives a proper function-call message. Instead, it always gets a plain chat message (role: "user") rather than a function message (role: "function"). As a result, MathAgent just says "I'm ready!" and waits for a function call that never arrives.
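For reference, this is roughly what MathAgent actually sees versus what I expected it to see (field values are illustrative):

```python
# What MathAgent actually receives after ExtractorAgent's turn: a plain chat message.
actual = {
    "role": "user",
    "name": "ExtractorAgent",
    "content": "Extracted labels: ...",  # labels rendered as plain text
}

# What I expected it to receive: the function result forwarded as a 'function' message
# (which the planner could then turn into a generate_math_equations call).
expected = {
    "role": "function",
    "name": "extract_eligibility",
    "content": '{"labels": ["..."]}',
}
```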
This is the code I have; please help:
```python
import json

from autogen import AssistantAgent, GroupChat, GroupChatManager, UserProxyAgent


class PlannerAgent(UserProxyAgent):
    def __init__(self, *args, candidates=None, model=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.candidates = candidates
        self.model = model

    def receive(self, message, sender, *args, **kwargs):
        print("PlannerAgent received:", message)
        # Step 1: user prompt, kick off ExtractorAgent
        if message.get("role") == "user":
            return {
                "function_call": {
                    "name": "extract_eligibility",
                    "arguments": json.dumps({
                        "candidates": self.candidates,
                        "model": self.model
                    })
                }
            }
        # Step 2: response from ExtractorAgent, call MathAgent
        if message.get("role") == "function" and message.get("name") == "extract_eligibility":
            content = message.get("content")
            labels = json.loads(content) if isinstance(content, str) else content
            return {
                "function_call": {
                    "name": "generate_math_equations",
                    "arguments": json.dumps({"labels": labels})
                }
            }
        # Step 3: response from MathAgent, return as final result
        if message.get("role") == "function" and message.get("name") == "generate_math_equations":
            return message
        # Fallback
        return super().receive(message, sender, *args, **kwargs)


def multi_agents(candidates, model="llama3"):
    extractor_agent = AssistantAgent(
        name="ExtractorAgent",
        llm_config=llm_config,
        system_message="""
        You are an eligibility-criteria extraction agent.
        If you receive a function call message with 'name' == 'extract_eligibility', call the function with the given arguments.
        Do NOT emit any function call in response to function results or other message types.
        Do NOT emit any function call if you already called the function.
        Only call the function when explicitly told to.
        Format:
        {
            "function_call": {
                "name": "extract_eligibility",
                "arguments": {"candidates": <provided>, "model": <provided>}
            }
        }
        Do NOT include any other text or code.
        """.strip(),
        function_map={"extract_eligibility": extract_eligibility_from_candidates_help},
    )

    math_agent = AssistantAgent(
        name="MathAgent",
        llm_config=llm_config,
        system_message="""
        You are a math-equation generation agent.
        Only call the function 'generate_math_equations' when you receive a function_call message for it.
        Do NOT respond to plain chat messages or any other message types.
        Only call the function when explicitly told to.
        Format:
        {
            "function_call": {
                "name": "generate_math_equations",
                "arguments": {"candidates": <provided>}
            }
        }
        """,
        function_map={"generate_math_equations": generate_math_equations_help}
    )

    planner = PlannerAgent(
        name="Planner",
        llm_config=llm_config,
        human_input_mode="NEVER",
        candidates=candidates,
        model=model,
        # system_message can be left blank or short
    )

    # 1) Create the group chat, listing the agents in the order you'd like them to participate
    groupchat = GroupChat(
        agents=[planner, extractor_agent, math_agent],
        messages=[],
        max_round=5,
        speaker_selection_method="auto"
    )

    # 2) Wrap it in a manager using the existing llm_config
    manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    final_response = planner.initiate_chat(
        manager,
        message={
            "function_call": {
                "name": "extract_eligibility",
                "arguments": json.dumps({
                    "candidates": candidates,
                    "model": model
                })
            }
        }
    )
    manager.run()
```
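(For reference, the two mapped helpers have the following signatures, matching the arguments the planner passes; their bodies aren't relevant to the routing problem, so they're stubbed out here.)

```python
# Placeholder signatures only -- the real implementations call the Ollama model
# and are omitted because they don't affect the message-routing issue above.
def extract_eligibility_from_candidates_help(candidates, model):
    """Return eligibility labels extracted from the candidate texts."""
    raise NotImplementedError


def generate_math_equations_help(labels):
    """Turn the extracted eligibility labels into math equations."""
    raise NotImplementedError
```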