agentscope.formatter¶
The formatter module in agentscope.
- class FormatterBase[source]¶
Bases:
object
The base class for formatters.
- abstract async format(*args, **kwargs)[source]¶
Format the Msg objects into a list of dictionaries that satisfy the API requirements.
- Parameters:
args (Any)
kwargs (Any)
- Return type:
list[dict[str, Any]]
- static assert_list_of_msgs(msgs)[source]¶
Assert that the input is a list of Msg objects.
- Parameters:
msgs (list[Msg]) – A list of Msg objects to be validated.
- Return type:
None
- static convert_tool_result_to_string(output)[source]¶
Convert the tool result list into a textual output, for compatibility with LLM APIs that do not support multimodal data.
- Parameters:
output (List[TextBlock | ImageBlock | AudioBlock]) – The output of the tool response, including text and multimodal data like images and audio.
- Returns:
A string representation of the tool result, with text blocks concatenated and multimodal data represented by file paths or URLs.
- Return type:
str
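A minimal usage sketch of the two static helpers above. The import paths and the TextBlock field names (type, text) are assumptions based on this page and may need adjusting to your installed version:
from agentscope.formatter import FormatterBase
from agentscope.message import Msg, TextBlock  # import paths assumed

# Validate that the input is a list of Msg objects (raises if it is not).
msgs = [Msg(name="user", content="What's the weather today?", role="user")]
FormatterBase.assert_list_of_msgs(msgs)

# Flatten a (text-only) tool result into a plain string for text-only APIs;
# image/audio blocks would be represented by their file paths or URLs.
tool_output = [TextBlock(type="text", text="Sunny, 25°C")]  # field names assumed
print(FormatterBase.convert_tool_result_to_string(tool_output))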
- class TruncatedFormatterBase[source]¶
Bases:
FormatterBase, ABC
Base class for truncated formatters, which format input messages into the required format while keeping the token count under a specified limit.
- __init__(token_counter=None, max_tokens=None)[source]¶
Initialize the TruncatedFormatterBase.
- Parameters:
token_counter (TokenCounterBase | None, optional) – A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.
max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.
- Return type:
None
- async format(msgs, **kwargs)[source]¶
Format the input messages into the required format. If a token counter and a maximum token limit are provided, the messages will be truncated to fit within the limit.
- Parameters:
msgs (list[Msg]) – The input messages to be formatted.
kwargs (Any)
- Returns:
The formatted messages in the required format.
- Return type:
list[dict[str, Any]]
- async _format(msgs)[source]¶
Format the input messages into the required format. This method should be implemented by the subclasses.
- Parameters:
msgs (list[Msg])
- Return type:
list[dict[str, Any]]
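A sketch of how a subclass might plug into this base class: only the async _format hook is implemented, and format() then applies the optional token-based truncation. The Msg accessors used here (msg.role, msg.get_text_content()) are assumptions about the Msg interface:
from typing import Any

from agentscope.formatter import TruncatedFormatterBase
from agentscope.message import Msg


class PlainChatFormatter(TruncatedFormatterBase):
    """Toy formatter mapping each Msg to a {"role", "content"} dict."""

    async def _format(self, msgs: list[Msg]) -> list[dict[str, Any]]:
        self.assert_list_of_msgs(msgs)
        return [
            {
                "role": msg.role,                   # attribute name assumed
                "content": msg.get_text_content(),  # accessor assumed
            }
            for msg in msgs
        ]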
- class DashScopeChatFormatter[source]¶
Bases:
TruncatedFormatterBase
Formatter for DashScope messages.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = False¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
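A minimal end-to-end sketch of the chat formatter, assuming a Msg(name, content, role) constructor and the async format() documented above:
import asyncio

from agentscope.formatter import DashScopeChatFormatter
from agentscope.message import Msg


async def main() -> None:
    formatter = DashScopeChatFormatter()
    prompt = await formatter.format(
        [
            Msg(name="system", content="You are a helpful assistant.", role="system"),
            Msg(name="user", content="Translate 'hello' into French.", role="user"),
        ],
    )
    print(prompt)  # a list[dict] ready to pass to the DashScope chat API


asyncio.run(main())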
- class DashScopeMultiAgentFormatter[source]¶
Bases:
TruncatedFormatterBase
DashScope formatter for multi-agent conversations, where more than a user and an agent are involved.
Note
This formatter will combine previous messages (except tool calls/results) into a history section in the first system message with the conversation history prompt.
Note
Tool calls/results are presented as separate messages, as required by the DashScope API. Therefore, tool call/result messages are expected to be placed at the end of the input messages.
Tip
Stating the assistant’s name in the system prompt is very important in multi-agent conversations, so that the LLM knows which role it is playing.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the DashScope multi-agent formatter.
- Parameters:
conversation_history_prompt (str) – The prompt to use for the conversation history section.
token_counter (TokenCounterBase | None, optional) – The token counter used for truncation.
max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.
- Return type:
None
- async _format_tool_sequence(msgs)[source]¶
Given a sequence of tool call/result messages, format them into the required format for the DashScope API.
- Parameters:
msgs (list[Msg]) – The list of messages containing tool calls/results to format.
- Returns:
A list of dictionaries formatted for the DashScope API.
- Return type:
list[dict[str, Any]]
- async _format_agent_message(msgs, is_first=True)[source]¶
Given a sequence of messages without tool calls/results, format them into a user message with conversation history tags. For the first agent message, it will include the conversation history prompt.
- Parameters:
msgs (list[Msg]) – A list of Msg objects to be formatted.
is_first (bool, defaults to True) – Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.
- Returns:
A list of dictionaries formatted for the DashScope API.
- Return type:
list[dict[str, Any]]
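A sketch of the multi-agent case described above, where several named speakers are combined into a <history> section; the exact output layout depends on the implementation:
import asyncio

from agentscope.formatter import DashScopeMultiAgentFormatter
from agentscope.message import Msg


async def main() -> None:
    formatter = DashScopeMultiAgentFormatter()
    prompt = await formatter.format(
        [
            Msg(name="system", content="You are playing the role of Alice.", role="system"),
            Msg(name="Alice", content="Hi Bob, shall we review the plan?", role="assistant"),
            Msg(name="Bob", content="Sure, let's start with the budget.", role="user"),
        ],
    )
    print(prompt)  # prior turns are wrapped in <history></history> tags


asyncio.run(main())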
- class OpenAIChatFormatter[source]¶
Bases:
TruncatedFormatterBase
The class used to format message objects into the format required by the OpenAI API.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision models are supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
Supported message blocks for OpenAI API
- class OpenAIMultiAgentFormatter[source]¶
Bases:
TruncatedFormatterBase
OpenAI formatter for multi-agent conversations, where more than a user and an agent are involved.
Tip
This formatter is compatible with the OpenAI API and OpenAI-compatible services like vLLM, Azure OpenAI, and others.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision models are supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
Supported message blocks for OpenAI API
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the OpenAI multi-agent formatter.
- Parameters:
conversation_history_prompt (str) – The prompt to use for the conversation history section.
token_counter (TokenCounterBase | None)
max_tokens (int | None)
- Return type:
None
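A sketch of customizing the conversation_history_prompt parameter documented above; passing a TokenCounterBase instance together with max_tokens would additionally enable truncation:
import asyncio

from agentscope.formatter import OpenAIMultiAgentFormatter
from agentscope.message import Msg


async def main() -> None:
    formatter = OpenAIMultiAgentFormatter(
        conversation_history_prompt=(
            "# Conversation History\n"
            "The content between <history></history> tags is the prior dialogue.\n"
        ),
    )
    prompt = await formatter.format(
        [
            Msg(name="system", content="You are playing the role of Moderator.", role="system"),
            Msg(name="Alice", content="I vote for option A.", role="assistant"),
            Msg(name="Bob", content="I prefer option B.", role="user"),
        ],
    )
    print(prompt)


asyncio.run(main())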
- class AnthropicChatFormatter[source]¶
Bases:
TruncatedFormatterBase
Formatter for Anthropic messages.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = False¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- async _format(msgs)[source]¶
Format message objects into Anthropic API format.
- Parameters:
msgs (list[Msg]) – The list of message objects to format.
- Returns:
The formatted messages as a list of dictionaries.
- Return type:
list[dict[str, Any]]
Note
Anthropic suggests always passing all previous thinking blocks back to the API in subsequent calls to maintain reasoning continuity. For more details, please refer to Anthropic’s documentation.
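A sketch of formatting a tool call/result pair with this formatter. The ToolUseBlock/ToolResultBlock field names and the role used for the tool result message are assumptions and may differ in your installed version:
import asyncio

from agentscope.formatter import AnthropicChatFormatter
from agentscope.message import Msg, ToolUseBlock, ToolResultBlock  # paths assumed


async def main() -> None:
    formatter = AnthropicChatFormatter()
    prompt = await formatter.format(
        [
            Msg(name="user", content="What's 2 + 2?", role="user"),
            Msg(  # the assistant issues a tool call (block fields assumed)
                name="assistant",
                content=[ToolUseBlock(type="tool_use", id="call_1", name="calculator", input={"expr": "2 + 2"})],
                role="assistant",
            ),
            Msg(  # the corresponding tool result is fed back
                name="system",
                content=[ToolResultBlock(type="tool_result", id="call_1", name="calculator", output="4")],
                role="system",
            ),
        ],
    )
    print(prompt)


asyncio.run(main())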
- class AnthropicMultiAgentFormatter[source]¶
Bases:
TruncatedFormatterBase
Anthropic formatter for multi-agent conversations, where more than a user and an agent are involved.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the Anthropic multi-agent formatter.
- Parameters:
conversation_history_prompt (str) – The prompt to use for the conversation history section.
token_counter (TokenCounterBase | None)
max_tokens (int | None)
- Return type:
None
- class GeminiChatFormatter[source]¶
Bases:
TruncatedFormatterBase
The formatter for Google Gemini API.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = False¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, VideoBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- supported_extensions: dict[str, list[str]] = {'audio': ['mp3', 'wav', 'aiff', 'aac', 'ogg', 'flac'], 'image': ['png', 'jpeg', 'webp', 'heic', 'heif'], 'video': ['mp4', 'mpeg', 'mov', 'avi', 'x-flv', 'mpg', 'webm', 'wmv', '3gpp']}¶
The supported file extensions for each modality
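The supported_extensions mapping above can be used to pre-check media files before formatting; a small sketch:
from pathlib import Path

from agentscope.formatter import GeminiChatFormatter


def is_supported(path: str, modality: str) -> bool:
    """Check a file extension against GeminiChatFormatter.supported_extensions."""
    ext = Path(path).suffix.lstrip(".").lower()
    return ext in GeminiChatFormatter.supported_extensions.get(modality, [])


print(is_supported("diagram.png", "image"))  # True
print(is_supported("clip.mkv", "video"))     # False: "mkv" is not listed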
- class GeminiMultiAgentFormatter[source]¶
Bases:
TruncatedFormatterBase
The multi-agent formatter for Google Gemini API, where more than a user and an agent are involved.
Note
This formatter will combine previous messages (except tool calls/results) into a history section in the first system message with the conversation history prompt.
Note
Tool calls/results are presented as separate messages, as required by the Gemini API. Therefore, tool call/result messages are expected to be placed at the end of the input messages.
Tip
Stating the assistant’s name in the system prompt is very important in multi-agent conversations, so that the LLM knows which role it is playing.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, VideoBlock, AudioBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the Gemini multi-agent formatter.
- Parameters:
conversation_history_prompt (str) – The prompt to be used for the conversation history section.
token_counter (TokenCounterBase | None, optional) – The token counter used for truncation.
max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.
- Return type:
None
- async _format_tool_sequence(msgs)[source]¶
Given a sequence of tool call/result messages, format them into the required format for the Gemini API.
- Parameters:
msgs (list[Msg]) – The list of messages containing tool calls/results to format.
- Returns:
A list of dictionaries formatted for the Gemini API.
- Return type:
list[dict[str, Any]]
- async _format_agent_message(msgs, is_first=True)[source]¶
Given a sequence of messages without tool calls/results, format them into the required format for the Gemini API.
- Parameters:
msgs (list[Msg]) – A list of Msg objects to be formatted.
is_first (bool, defaults to True) – Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.
- Returns:
A list of dictionaries formatted for the Gemini API.
- Return type:
list[dict[str, Any]]
- class OllamaChatFormatter[source]¶
Bases:
TruncatedFormatterBase
Formatter for Ollama messages.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = False¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- class OllamaMultiAgentFormatter[source]¶
Bases:
TruncatedFormatterBase
Ollama formatter for multi-agent conversations, where more than a user and an agent are involved.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = True¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the Ollama multi-agent formatter.
- Parameters:
conversation_history_prompt (str) – The prompt to use for the conversation history section.
token_counter (TokenCounterBase | None, optional) – The token counter used for truncation.
max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.
- Return type:
None
- async _format_tool_sequence(msgs)[source]¶
Given a sequence of tool call/result messages, format them into the required format for the Ollama API.
- Parameters:
msgs (list[Msg]) – The list of messages containing tool calls/results to format.
- Returns:
A list of dictionaries formatted for the Ollama API.
- Return type:
list[dict[str, Any]]
- async _format_agent_message(msgs, is_first=True)[source]¶
Given a sequence of messages without tool calls/results, format them into the required format for the Ollama API.
- Parameters:
msgs (list[Msg]) – A list of Msg objects to be formatted.
is_first (bool, defaults to True) – Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.
- Returns:
A list of dictionaries formatted for the Ollama API.
- Return type:
list[dict[str, Any]]
- class DeepSeekChatFormatter[source]¶
Bases:
TruncatedFormatterBase
Formatter for DeepSeek messages.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = False¶
Whether multi-agent conversations are supported
- support_vision: bool = False¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
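Since only text and tool blocks are listed as supported above, multimodal tool output can first be flattened with convert_tool_result_to_string; a sketch assuming the TextBlock fields shown earlier:
import asyncio

from agentscope.formatter import DeepSeekChatFormatter
from agentscope.message import Msg, TextBlock  # import paths assumed


async def main() -> None:
    formatter = DeepSeekChatFormatter()
    # Flatten tool output into plain text for this text-only formatter.
    tool_text = DeepSeekChatFormatter.convert_tool_result_to_string(
        [TextBlock(type="text", text="Search finished: 3 results found.")],
    )
    prompt = await formatter.format(
        [
            Msg(name="system", content="You are a helpful assistant.", role="system"),
            Msg(name="user", content=f"Tool output: {tool_text}", role="user"),
        ],
    )
    print(prompt)


asyncio.run(main())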
- class DeepSeekMultiAgentFormatter[source]¶
Bases:
TruncatedFormatterBase
DeepSeek formatter for multi-agent conversations, where more than a user and an agent are involved.
- support_tools_api: bool = True¶
Whether the tools API is supported
- support_multiagent: bool = True¶
Whether multi-agent conversations are supported
- support_vision: bool = False¶
Whether vision data is supported
- supported_blocks: list[type] = [TextBlock, ToolUseBlock, ToolResultBlock]¶
The list of supported message blocks
- __init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]¶
Initialize the DeepSeek multi-agent formatter.
- Parameters:
conversation_history_prompt (str) – The prompt to use for the conversation history section.
token_counter (TokenCounterBase | None, optional) – A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.
max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.
- Return type:
None
- async _format_tool_sequence(msgs)[source]¶
Given a sequence of tool call/result messages, format them into the required format for the DeepSeek API.
- Parameters:
msgs (list[Msg]) – The list of messages containing tool calls/results to format.
- Returns:
A list of dictionaries formatted for the DeepSeek API.
- Return type:
list[dict[str, Any]]
- async _format_agent_message(msgs, is_first=True)[source]¶
Given a sequence of messages without tool calls/results, format them into the required format for the DeepSeek API.
- Parameters:
msgs (list[Msg]) – A list of Msg objects to be formatted.
is_first (bool, defaults to True) – Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.
- Returns:
A list of dictionaries formatted for the DeepSeek API.
- Return type:
list[dict[str, Any]]