agentscope.formatter¶
The formatter module in agentscope.
- class FormatterBase[source]¶
- Bases:
object
The base class for formatters.
- abstract async format(*args, **kwargs)[source]¶
Format the Msg objects into a list of dictionaries that satisfies the API requirements.
- Parameters:
args (Any)
kwargs (Any)
- Return type:
list[dict[str, Any]]
- static assert_list_of_msgs(msgs)[source]¶
Assert that the input is a list of Msg objects.
- Parameters:
msgs (list[Msg]) -- A list of Msg objects to be validated.
- Return type:
None
- static convert_tool_result_to_string(output)[source]¶
Convert the tool result list into a textual output, for compatibility with LLM APIs that do not support multimodal data.
- Parameters:
output (str | List[TextBlock | ImageBlock | AudioBlock]) -- The output of the tool response, including text and multimodal data such as images and audio.
- Returns:
A string representation of the tool result, with text blocks concatenated and multimodal data represented by file paths or URLs.
- Return type:
str
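Example (illustrative): a minimal sketch of collapsing a multimodal tool result into text with this helper. The exact field layout of ImageBlock (in particular the source mapping) is an assumption and should be checked against agentscope.message.

from agentscope.formatter import FormatterBase
from agentscope.message import TextBlock, ImageBlock

# A hypothetical tool result mixing text and an image.
tool_output = [
    TextBlock(type="text", text="Found 3 matching records."),
    # The "source" layout below is an assumption; consult ImageBlock's definition.
    ImageBlock(
        type="image",
        source={"type": "url", "url": "https://example.com/chart.png"},
    ),
]

# Collapse the multimodal result into plain text for APIs that only accept text.
print(FormatterBase.convert_tool_result_to_string(tool_output))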
- class TruncatedFormatterBase[source]¶
- Bases:
FormatterBase, ABC
Base class for truncated formatters, which format input messages into the required format while keeping the token count under a specified limit.
- __init__(token_counter=None, max_tokens=None)[source]¶
Initialize the TruncatedFormatterBase.
- Parameters:
token_counter (TokenCounterBase | None, optional) -- A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.
max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.
- Return type:
None
- async format(msgs, **kwargs)[source]¶
Format the input messages into the required format. If a token counter and a max token limit are provided, the messages will be truncated to fit the limit.
- Parameters:
msgs (list[Msg]) -- The input messages to be formatted.
kwargs (Any)
- Returns:
The formatted messages in the required format.
- Return type:
list[dict[str, Any]]
- async _format(msgs)[source]¶
Format the input messages into the required format. This method should be implemented by subclasses.
- Parameters:
msgs (list[Msg])
- Return type:
list[dict[str, Any]]
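Example (illustrative): a minimal sketch of the subclassing contract, implementing _format in a custom formatter. It assumes Msg.get_text_content() is available for extracting the plain-text part of a message; the public format() then applies any configured truncation before delegating to this method.

from typing import Any

from agentscope.formatter import TruncatedFormatterBase
from agentscope.message import Msg


class PlainTextFormatter(TruncatedFormatterBase):
    """Illustrative subclass: render each Msg as a plain role/content dict."""

    async def _format(self, msgs: list[Msg]) -> list[dict[str, Any]]:
        self.assert_list_of_msgs(msgs)
        return [
            {
                "role": msg.role,
                # get_text_content() is assumed to return the textual part of
                # the message; adapt if your Msg exposes a different accessor.
                "content": msg.get_text_content(),
            }
            for msg in msgs
        ]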
- class DashScopeChatFormatter[source]¶
Formatter for DashScope messages.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = False¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
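Example (illustrative): a minimal usage sketch. It assumes the Msg constructor takes name, content, and role, and that format() is awaited inside an event loop.

import asyncio

from agentscope.formatter import DashScopeChatFormatter
from agentscope.message import Msg


async def main() -> None:
    msgs = [
        Msg(name="system", content="You are a helpful assistant.", role="system"),
        Msg(name="user", content="What is the capital of France?", role="user"),
        Msg(name="assistant", content="Paris.", role="assistant"),
    ]
    formatter = DashScopeChatFormatter()
    # Returns a list of role/content dictionaries ready to send to the
    # DashScope chat API.
    print(await formatter.format(msgs))


asyncio.run(main())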
- class DashScopeMultiAgentFormatter[source]¶
DashScope formatter for multi-agent conversations, where more than a user and an agent are involved.
Note
This formatter will combine previous messages (except tool calls/results) into a history section in the first system message, using the conversation history prompt.
Note
Tool calls/results will be presented as separate messages, as required by the DashScope API. Therefore, the tool call/result messages are expected to be placed at the end of the input messages.
Tip
Stating the assistant's name in the system prompt is very important in multi-agent conversations, so that the LLM knows which role it is playing.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the DashScope multi-agent formatter.
- Parameters:
conversation_history_prompt (str) -- The prompt to use for the conversation history section.
token_counter (TokenCounterBase | None, optional) -- The token counter used for truncation.
max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.
- Return type:
None
- async _format_tool_sequence(msgs)[source]¶
Given a sequence of tool call/result messages, format them into the required format for the DashScope API.
- Parameters:
msgs (list[Msg]) -- The list of messages containing tool calls/results to format.
- Returns:
A list of dictionaries formatted for the DashScope API.
- Return type:
list[dict[str, Any]]
- async _format_agent_message(msgs, is_first=True)[source]¶
Given a sequence of messages without tool calls/results, format them into a user message with conversation history tags. For the first agent message, the conversation history prompt will be included.
- Parameters:
msgs (list[Msg]) -- A list of Msg objects to be formatted.
is_first (bool, defaults to True) -- Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.
- Returns:
A list of dictionaries formatted for the DashScope API.
- Return type:
list[dict[str, Any]]
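Example (illustrative): a sketch of multi-agent formatting. Names and roles are illustrative; the speaker is identified by the Msg name, which is preserved inside the history section.

import asyncio

from agentscope.formatter import DashScopeMultiAgentFormatter
from agentscope.message import Msg


async def main() -> None:
    msgs = [
        Msg(name="system", content="You are playing the role of Alice.", role="system"),
        Msg(name="Bob", content="Hi Alice, how was the meeting?", role="assistant"),
        Msg(name="Alice", content="It went well, we agreed on the plan.", role="assistant"),
    ]
    formatter = DashScopeMultiAgentFormatter()
    # Non-tool messages are folded into a <history>...</history> section attached
    # to the first system message, as described above.
    print(await formatter.format(msgs))


asyncio.run(main())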
- class OpenAIChatFormatter[source]¶
The class used to format message objects into the format required by the OpenAI API.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision models are supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
Supported message blocks for the OpenAI API
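Example (illustrative): a sketch of formatting a vision message. The exact "source" layout of ImageBlock is an assumption; check its definition in agentscope.message.

import asyncio

from agentscope.formatter import OpenAIChatFormatter
from agentscope.message import Msg, TextBlock, ImageBlock


async def main() -> None:
    msgs = [
        Msg(
            name="user",
            content=[
                TextBlock(type="text", text="What is shown in this picture?"),
                # The "source" layout is an assumption; check ImageBlock's
                # definition for the exact fields.
                ImageBlock(
                    type="image",
                    source={"type": "url", "url": "https://example.com/cat.png"},
                ),
            ],
            role="user",
        ),
    ]
    print(await OpenAIChatFormatter().format(msgs))


asyncio.run(main())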
- class OpenAIMultiAgentFormatter[source]¶
OpenAI formatter for multi-agent conversations, where more than a user and an agent are involved.
Tip
This formatter is compatible with the OpenAI API and with OpenAI-compatible services such as vLLM and Azure OpenAI.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision models are supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
Supported message blocks for the OpenAI API
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the OpenAI multi-agent formatter.
- Parameters:
conversation_history_prompt (str) -- The prompt to use for the conversation history section.
token_counter (TokenCounterBase | None) -- The token counter used for truncation.
max_tokens (int | None) -- The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.
- Return type:
None
- class AnthropicChatFormatter[source]¶
Formatter for Anthropic messages.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = False¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- async _format(msgs)[source]¶
Format message objects into the Anthropic API format.
- Parameters:
msgs (list[Msg]) -- The list of message objects to format.
- Returns:
The formatted messages as a list of dictionaries.
- Return type:
list[dict[str, Any]]
Note
Anthropic suggests always passing all previous thinking blocks back to the API in subsequent calls to maintain reasoning continuity. For more details, please refer to Anthropic's documentation.
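Example (illustrative): a sketch of formatting a tool call/result exchange with AnthropicChatFormatter. The field names of ToolUseBlock/ToolResultBlock (id, name, input, output) and the role assigned to the tool-result message are assumptions to be checked against agentscope.message.

import asyncio

from agentscope.formatter import AnthropicChatFormatter
from agentscope.message import Msg, TextBlock, ToolUseBlock, ToolResultBlock


async def main() -> None:
    msgs = [
        Msg(name="user", content="What's the weather in Paris?", role="user"),
        Msg(
            name="assistant",
            content=[
                # Field names below are assumptions about the block layout.
                ToolUseBlock(
                    type="tool_use",
                    id="call_1",
                    name="get_weather",
                    input={"city": "Paris"},
                ),
            ],
            role="assistant",
        ),
        Msg(
            name="system",
            content=[
                ToolResultBlock(
                    type="tool_result",
                    id="call_1",
                    name="get_weather",
                    output=[TextBlock(type="text", text="Sunny, 21°C")],
                ),
            ],
            role="system",
        ),
    ]
    print(await AnthropicChatFormatter().format(msgs))


asyncio.run(main())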
- class AnthropicMultiAgentFormatter[source]¶
Anthropic formatter for multi-agent conversations, where more than a user and an agent are involved.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the Anthropic multi-agent formatter.
- Parameters:
conversation_history_prompt (str) -- The prompt to use for the conversation history section.
token_counter (TokenCounterBase | None) -- The token counter used for truncation.
max_tokens (int | None) -- The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.
- Return type:
None
- class GeminiChatFormatter[source]¶
The formatter for the Google Gemini API.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = False¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, VideoBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- supported_extensions: dict[str, list[str]] = {'audio': ['mp3', 'wav', 'aiff', 'aac', 'ogg', 'flac'], 'image': ['png', 'jpeg', 'webp', 'heic', 'heif'], 'video': ['mp4', 'mpeg', 'mov', 'avi', 'x-flv', 'mpg', 'webm', 'wmv', '3gpp']}¶
The supported file extensions for each modality
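Example (illustrative): a hypothetical helper, not part of the library, that checks a local file against the extensions this formatter declares for a given modality.

from pathlib import Path

from agentscope.formatter import GeminiChatFormatter


def is_supported_media(path: str, modality: str) -> bool:
    """Hypothetical helper: check a file extension against the extensions the
    formatter declares for 'image', 'audio' or 'video'."""
    ext = Path(path).suffix.lstrip(".").lower()
    return ext in GeminiChatFormatter.supported_extensions.get(modality, [])


print(is_supported_media("clip.mov", "video"))   # True: 'mov' is listed
print(is_supported_media("photo.gif", "image"))  # False: 'gif' is not listed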
- class GeminiMultiAgentFormatter[source]¶
The multi-agent formatter for the Google Gemini API, where more than a user and an agent are involved.
Note
This formatter will combine previous messages (except tool calls/results) into a history section in the first system message, using the conversation history prompt.
Note
Tool calls/results will be presented as separate messages, as required by the Gemini API. Therefore, the tool call/result messages are expected to be placed at the end of the input messages.
Tip
Stating the assistant's name in the system prompt is very important in multi-agent conversations, so that the LLM knows which role it is playing.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, VideoBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the Gemini multi-agent formatter.
- Parameters:
conversation_history_prompt (str) -- The prompt to be used for the conversation history section.
token_counter (TokenCounterBase | None, optional) -- The token counter used for truncation.
max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.
- Return type:
None
- async _format_tool_sequence(msgs)[source]¶
Given a sequence of tool call/result messages, format them into the required format for the Gemini API.
- Parameters:
msgs (list[Msg]) -- The list of messages containing tool calls/results to format.
- Returns:
A list of dictionaries formatted for the Gemini API.
- Return type:
list[dict[str, Any]]
- async _format_agent_message(msgs, is_first=True)[source]¶
Given a sequence of messages without tool calls/results, format them into the required format for the Gemini API.
- Parameters:
msgs (list[Msg]) -- A list of Msg objects to be formatted.
is_first (bool, defaults to True) -- Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.
- Returns:
A list of dictionaries formatted for the Gemini API.
- Return type:
list[dict[str, Any]]
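Example (illustrative): a sketch of customizing the conversation history prompt. The Msg names and roles are illustrative; leaving token_counter and max_tokens at their defaults disables truncation.

import asyncio

from agentscope.formatter import GeminiMultiAgentFormatter
from agentscope.message import Msg


async def main() -> None:
    # Customize the prompt that introduces the <history> section.
    formatter = GeminiMultiAgentFormatter(
        conversation_history_prompt=(
            "# Conversation History\n"
            "You are playing Alice. The content between <history></history> "
            "tags contains the dialogue so far.\n"
        ),
    )
    msgs = [
        Msg(name="system", content="You are Alice, a friendly shop assistant.", role="system"),
        Msg(name="Bob", content="Do you have this jacket in a medium?", role="user"),
        Msg(name="Alice", content="Let me check the stock for you.", role="assistant"),
    ]
    print(await formatter.format(msgs))


asyncio.run(main())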
- class OllamaChatFormatter[source]¶
Formatter for Ollama messages.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = False¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- class OllamaMultiAgentFormatter[source]¶
Ollama formatter for multi-agent conversations, where more than a user and an agent are involved.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the Ollama multi-agent formatter.
- Parameters:
conversation_history_prompt (str) -- The prompt to use for the conversation history section.
token_counter (TokenCounterBase | None, optional) -- The token counter used for truncation.
max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.
- Return type:
None
- async _format_tool_sequence(msgs)[source]¶
Given a sequence of tool call/result messages, format them into the required format for the Ollama API.
- Parameters:
msgs (list[Msg]) -- The list of messages containing tool calls/results to format.
- Returns:
A list of dictionaries formatted for the Ollama API.
- Return type:
list[dict[str, Any]]
- async _format_agent_message(msgs, is_first=True)[source]¶
Given a sequence of messages without tool calls/results, format them into the required format for the Ollama API.
- Parameters:
msgs (list[Msg]) -- A list of Msg objects to be formatted.
is_first (bool, defaults to True) -- Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.
- Returns:
A list of dictionaries formatted for the Ollama API.
- Return type:
list[dict[str, Any]]
- class DeepSeekChatFormatter[source]¶
Formatter for DeepSeek messages.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = False¶
Whether multi-agent conversations are supported
- support_vision: bool = False¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- class DeepSeekMultiAgentFormatter[source]¶
DeepSeek formatter for multi-agent conversations, where more than a user and an agent are involved.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = False¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the DeepSeek multi-agent formatter.
- Parameters:
conversation_history_prompt (str) -- The prompt to use for the conversation history section.
token_counter (TokenCounterBase | None, optional) -- A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.
max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.
- Return type:
None
- async _format_tool_sequence(msgs)[source]¶
Given a sequence of tool call/result messages, format them into the required format for the DeepSeek API.
- Parameters:
msgs (list[Msg]) -- The list of messages containing tool calls/results to format.
- Returns:
A list of dictionaries formatted for the DeepSeek API.
- Return type:
list[dict[str, Any]]
- async _format_agent_message(msgs, is_first=True)[source]¶
Given a sequence of messages without tool calls/results, format them into the required format for the DeepSeek API.
- Parameters:
msgs (list[Msg]) -- A list of Msg objects to be formatted.
is_first (bool, defaults to True) -- Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.
- Returns:
A list of dictionaries formatted for the DeepSeek API.
- Return type:
list[dict[str, Any]]