agentscope.formatter

The formatter module in agentscope.

class FormatterBase[source]

Bases: object

The base class for formatters.

abstract async format(*args, **kwargs)[source]

Format the Msg objects into a list of dictionaries that satisfy the API requirements.

Parameters:
  • args (Any)

  • kwargs (Any)

Return type:

list[dict[str, Any]]

static assert_list_of_msgs(msgs)[source]

Assert that the input is a list of Msg objects.

Parameters:

msgs (list[Msg]) -- A list of Msg objects to be validated.

Return type:

None
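The validation above can be sketched in a few lines. This is an illustrative stand-in, not the real agentscope implementation; the `Msg` dataclass here is a simplified placeholder for `agentscope.message.Msg`.

```python
from dataclasses import dataclass


@dataclass
class Msg:
    """Simplified stand-in for agentscope.message.Msg (illustration only)."""
    name: str
    content: str
    role: str


def assert_list_of_msgs(msgs):
    """Raise TypeError unless msgs is a list of Msg instances."""
    if not isinstance(msgs, list) or not all(isinstance(m, Msg) for m in msgs):
        raise TypeError("Expected a list of Msg objects")


assert_list_of_msgs([Msg("Friday", "Hi!", "assistant")])  # passes silently
```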

static convert_tool_result_to_string(output)[source]

Turn the tool result list into a textual output, for compatibility with LLM APIs that don't support multimodal data in tool results.

For URL-based images, the URL is included in the returned list. For base64-encoded images, the local file path where the image is saved is included in the returned list.

Parameters:

output (str | List[TextBlock | ImageBlock | AudioBlock | VideoBlock]) -- The output of the tool response, including text and multimodal data like images and audio.

Returns:

A tuple containing the textual representation of the tool result and a list of tuples. The first element of each inner tuple is the local file path or URL of the multimodal data, and the second element is the corresponding block.

Return type:

tuple[str, list[Tuple[str, ImageBlock | AudioBlock | VideoBlock | TextBlock]]]
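The flattening described above can be sketched as follows, using plain dicts in place of the real block classes. The `flatten_tool_result` name and the dict shapes are illustrative assumptions, not the exact agentscope internals.

```python
def flatten_tool_result(output):
    """Collapse a mixed tool result into (text, [(path_or_url, block), ...])."""
    if isinstance(output, str):
        # A plain string result needs no flattening.
        return output, []
    texts, media = [], []
    for block in output:
        if block["type"] == "text":
            texts.append(block["text"])
        elif block["type"] == "image":
            # URL-based images keep their URL; base64 images would instead be
            # saved locally and the file path recorded here.
            url = block["source"]["url"]
            texts.append(f"[image: {url}]")
            media.append((url, block))
    return "\n".join(texts), media


text, media = flatten_tool_result([
    {"type": "text", "text": "Chart generated."},
    {"type": "image", "source": {"url": "https://example.com/a.png"}},
])
```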

class TruncatedFormatterBase[source]

Bases: FormatterBase, ABC

Base class for truncated formatters, which format input messages into the required format while keeping the token count under a specified limit.

__init__(token_counter=None, max_tokens=None)[source]

Initialize the TruncatedFormatterBase.

Parameters:
  • token_counter (TokenCounterBase | None, optional) -- A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async format(msgs, **kwargs)[source]

Format the input messages into the required format. If a token counter and a max token limit are provided, the messages will be truncated to fit the limit.

Parameters:
  • msgs (list[Msg]) -- The input messages to be formatted.

  • kwargs (Any)

Returns:

The formatted messages in the required format.

Return type:

list[dict[str, Any]]

async _format(msgs)[source]

Format the input messages into the required format. This method should be implemented by subclasses.

Parameters:

msgs (list[Msg])

Return type:

list[dict[str, Any]]

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the LLM API.

Parameters:

msgs (list[Msg])

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the LLM API.

Parameters:
  • msgs (list[Msg])

  • is_first (bool)

Return type:

list[dict[str, Any]]
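The template-method flow of a truncated formatter can be sketched as below. This is a hypothetical synchronous sketch: the class name, the callable token counter, and the drop-oldest truncation policy are all assumptions for illustration, not the real agentscope behavior.

```python
class SketchTruncatedFormatter:
    """Format messages, dropping the oldest until under a token budget."""

    def __init__(self, token_counter=None, max_tokens=None):
        self.token_counter = token_counter  # callable: list[dict] -> int
        self.max_tokens = max_tokens

    def _format(self, msgs):
        # Subclasses would override this with API-specific formatting.
        return [{"role": m["role"], "content": m["content"]} for m in msgs]

    def format(self, msgs):
        formatted = self._format(msgs)
        if self.token_counter is None or self.max_tokens is None:
            return formatted  # no truncation without a counter and a limit
        while len(formatted) > 1 and self.token_counter(formatted) > self.max_tokens:
            msgs = msgs[1:]  # drop the oldest message and reformat
            formatted = self._format(msgs)
        return formatted


# Toy "token" counter: counts whitespace-separated words.
count_words = lambda dicts: sum(len(d["content"].split()) for d in dicts)

fmt = SketchTruncatedFormatter(token_counter=count_words, max_tokens=3)
out = fmt.format([
    {"role": "user", "content": "one two three"},
    {"role": "assistant", "content": "four five"},
])
```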

class DashScopeChatFormatter[source]

Bases: TruncatedFormatterBase

The DashScope formatter class for the chatbot scenario, where only a user and an agent are involved. We use the role field to identify the different entities in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = False

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, VideoBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(promote_tool_result_images=False, promote_tool_result_audios=False, promote_tool_result_videos=False, token_counter=None, max_tokens=None)[source]

Initialize the DashScope chat formatter.

Parameters:
  • promote_tool_result_images (bool, defaults to False) -- Whether to promote images from tool results to user messages. Most LLM APIs don't support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • promote_tool_result_audios (bool, defaults to False) -- Whether to promote audios from tool results to user messages. Most LLM APIs don't support audios in tool result blocks, but do support them in user message blocks. When True, audios are extracted and appended as a separate user message with explanatory text indicating their source.

  • promote_tool_result_videos (bool, defaults to False) -- Whether to promote videos from tool results to user messages. Most LLM APIs don't support videos in tool result blocks, but do support them in user message blocks. When True, videos are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) -- A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None
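The promote_tool_result_images behavior described above can be sketched like this. Block shapes and the `promote_images` helper are simplified stand-ins for illustration; the real formatter also adds explanatory text indicating the images' source.

```python
def promote_images(messages):
    """Pull image blocks out of tool results into a trailing user message."""
    promoted = []
    for msg in messages:
        if msg["role"] != "tool":
            continue
        for block in msg.get("content", []):
            if block.get("type") == "image":
                promoted.append(block)
    if promoted:
        # Append a separate user message carrying the extracted images,
        # since most APIs reject images inside tool result blocks.
        messages = messages + [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Images from the tool result above:"},
            ] + promoted,
        }]
    return messages


out = promote_images([
    {"role": "tool", "content": [
        {"type": "text", "text": "done"},
        {"type": "image", "url": "https://example.com/p.png"},
    ]},
])
```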

async _format(msgs)[source]

Format message objects into the DashScope API format.

Parameters:

msgs (list[Msg]) -- The list of message objects to format.

Returns:

The formatted messages as a list of dictionaries.

Return type:

list[dict[str, Any]]

class DashScopeMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

DashScope formatter for multi-agent conversations, where more than a user and an agent are involved.

Note

This formatter will combine previous messages (except tool calls/results) into a history section in the first system message with the conversation history prompt.

Note

For tool calls/results, they will be presented as separate messages as required by the DashScope API. Therefore, the tool call/result messages are expected to be placed at the end of the input messages.

Tip

Including the assistant's name in the system prompt is very important in multi-agent conversations, so that the LLM knows which role it is playing.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, VideoBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', promote_tool_result_images=False, promote_tool_result_audios=False, promote_tool_result_videos=False, token_counter=None, max_tokens=None)[source]

Initialize the DashScope multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) -- The prompt to use for the conversation history section.

  • promote_tool_result_images (bool, defaults to False) -- Whether to promote images from tool results to user messages. Most LLM APIs don't support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • promote_tool_result_audios (bool, defaults to False) -- Whether to promote audios from tool results to user messages. Most LLM APIs don't support audios in tool result blocks, but do support them in user message blocks. When True, audios are extracted and appended as a separate user message with explanatory text indicating their source.

  • promote_tool_result_videos (bool, defaults to False) -- Whether to promote videos from tool results to user messages. Most LLM APIs don't support videos in tool result blocks, but do support them in user message blocks. When True, videos are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) -- The token counter used for truncation.

  • max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the DashScope API.

Parameters:

msgs (list[Msg]) -- The list of messages containing tool calls/results to format.

Returns:

A list of dictionaries formatted for the DashScope API.

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into a user message with conversation history tags. The first agent message will include the conversation history prompt.

Parameters:
  • msgs (list[Msg]) -- A list of Msg objects to be formatted.

  • is_first (bool, defaults to True) -- Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.

Returns:

A list of dictionaries formatted for the DashScope API.

Return type:

list[dict[str, Any]]
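The history packing described above can be sketched as follows. This is an illustrative approximation under stated assumptions: the `name: content` line format and the exact message layout are stand-ins, not the verbatim agentscope output.

```python
HISTORY_PROMPT = (
    "# Conversation History\n"
    "The content between <history></history> tags contains "
    "your conversation history\n"
)


def format_agent_messages(msgs, is_first=True):
    """Merge agent messages into one user message wrapped in history tags."""
    lines = [f"{m['name']}: {m['content']}" for m in msgs]
    body = "<history>\n" + "\n".join(lines) + "\n</history>"
    # Only the first chunk of history carries the explanatory prompt.
    prefix = HISTORY_PROMPT if is_first else ""
    return [{"role": "user", "content": prefix + body}]


out = format_agent_messages([
    {"name": "Alice", "content": "Hi"},
    {"name": "Bob", "content": "Hello"},
])
```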

class OpenAIChatFormatter[source]

Bases: TruncatedFormatterBase

The OpenAI formatter class for the chatbot scenario, where only a user and an agent are involved. We use the name field in the OpenAI API to identify the different entities in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision models are supported

supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]

Supported message blocks for OpenAI API

__init__(promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the OpenAI chat formatter.

Parameters:
  • promote_tool_result_images (bool, defaults to False) -- Whether to promote images from tool results to user messages. Most LLM APIs don't support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) -- A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async _format(msgs)[source]

Format message objects into the format required by the OpenAI API.

Parameters:

msgs (list[Msg]) -- The list of Msg objects to format.

Returns:

A list of dictionaries, where each dictionary has "name", "role", and "content" keys.

Return type:

list[dict[str, Any]]
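The output shape described above can be sketched like this, with plain dicts standing in for Msg objects. This is a minimal text-only sketch; the real formatter emits structured content parts for multimodal input.

```python
def format_openai(msgs):
    """Map each message to a dict with "name", "role", and "content" keys."""
    return [
        {"name": m["name"], "role": m["role"], "content": m["content"]}
        for m in msgs
    ]


out = format_openai([
    {"name": "Friday", "role": "assistant", "content": "Done."},
])
```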

class OpenAIMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

OpenAI formatter for multi-agent conversations, where more than a user and an agent are involved.

Tip

This formatter is compatible with the OpenAI API and OpenAI-compatible services like vLLM, Azure OpenAI, and others.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision models are supported

supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]

Supported message blocks for OpenAI API

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the OpenAI multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) -- The prompt to use for the conversation history section.

  • promote_tool_result_images (bool, defaults to False) -- Whether to promote images from tool results to user messages. Most LLM APIs don't support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) -- A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the OpenAI API.

Parameters:

msgs (list[Msg])

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the OpenAI API.

Parameters:
  • msgs (list[Msg])

  • is_first (bool)

Return type:

list[dict[str, Any]]

class AnthropicChatFormatter[source]

Bases: TruncatedFormatterBase

The Anthropic formatter class for the chatbot scenario, where only a user and an agent are involved. We use the role field to identify the different entities in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = False

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

async _format(msgs)[source]

Format message objects into the Anthropic API format.

Parameters:

msgs (list[Msg]) -- The list of message objects to format.

Returns:

The formatted messages as a list of dictionaries.

Return type:

list[dict[str, Any]]

Note

Anthropic suggests always passing all previous thinking blocks back to the API in subsequent calls to maintain reasoning continuity. For more details, please refer to Anthropic's documentation.

class AnthropicMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

Anthropic formatter for multi-agent conversations, where more than a user and an agent are involved.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]

Initialize the Anthropic multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) -- The prompt to use for the conversation history section.

  • token_counter (TokenCounterBase | None)

  • max_tokens (int | None)

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the Anthropic API.

Parameters:

msgs (list[Msg])

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the Anthropic API.

Parameters:
  • msgs (list[Msg])

  • is_first (bool)

Return type:

list[dict[str, Any]]

class GeminiChatFormatter[source]

Bases: TruncatedFormatterBase

The Gemini formatter class for the chatbot scenario, where only a user and an agent are involved. We use the role field to identify the different entities in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = False

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, VideoBlock, AudioBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

supported_extensions: dict[str, list[str]] = {'audio': ['mp3', 'wav', 'aiff', 'aac', 'ogg', 'flac'], 'image': ['png', 'jpeg', 'webp', 'heic', 'heif'], 'video': ['mp4', 'mpeg', 'mov', 'avi', 'x-flv', 'mpg', 'webm', 'wmv', '3gpp']}

The supported file extensions for each modality

__init__(promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the Gemini chat formatter.

Parameters:
  • promote_tool_result_images (bool, defaults to False) -- Whether to promote images from tool results to user messages. Most LLM APIs don't support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) -- A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async _format(msgs)[source]

Format message objects into the format required by the Gemini API.

Parameters:

msgs (list[Msg])

Return type:

list[dict]
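Checking a file against the supported_extensions table above can be sketched as follows. The `is_supported` helper is illustrative, not an agentscope API; the extension lists are copied from the table.

```python
SUPPORTED_EXTENSIONS = {
    "audio": ["mp3", "wav", "aiff", "aac", "ogg", "flac"],
    "image": ["png", "jpeg", "webp", "heic", "heif"],
    "video": ["mp4", "mpeg", "mov", "avi", "x-flv", "mpg", "webm", "wmv", "3gpp"],
}


def is_supported(modality, path_or_url):
    """Return True if the file extension is supported for the given modality."""
    ext = path_or_url.rsplit(".", 1)[-1].lower()
    return ext in SUPPORTED_EXTENSIONS.get(modality, [])
```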

class GeminiMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

The multi-agent formatter for the Google Gemini API, where more than a user and an agent are involved.

Note

This formatter will combine previous messages (except tool calls/results) into a history section in the first system message with the conversation history prompt.

Note

For tool calls/results, they will be presented as separate messages as required by the Gemini API. Therefore, the tool call/result messages are expected to be placed at the end of the input messages.

Tip

Including the assistant's name in the system prompt is very important in multi-agent conversations, so that the LLM knows which role it is playing.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, VideoBlock, AudioBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the Gemini multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) -- The prompt to be used for the conversation history section.

  • promote_tool_result_images (bool, defaults to False) -- Whether to promote images from tool results to user messages. Most LLM APIs don't support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) -- The token counter used for truncation.

  • max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the Gemini API.

Parameters:

msgs (list[Msg]) -- The list of messages containing tool calls/results to format.

Returns:

A list of dictionaries formatted for the Gemini API.

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the Gemini API.

Parameters:
  • msgs (list[Msg]) -- A list of Msg objects to be formatted.

  • is_first (bool, defaults to True) -- Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.

Returns:

A list of dictionaries formatted for the Gemini API.

Return type:

list[dict[str, Any]]

class OllamaChatFormatter[source]

Bases: TruncatedFormatterBase

The Ollama formatter class for the chatbot scenario, where only a user and an agent are involved. We use the role field to identify the different participants in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = False

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the Ollama chat formatter.

Parameters:
  • promote_tool_result_images (bool, defaults to False) -- Whether to promote images from tool results to user messages. Most LLM APIs don't support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) -- A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async _format(msgs)[source]

Format message objects into the Ollama API format.

Parameters:

msgs (list[Msg]) -- The list of message objects to format.

Returns:

The formatted messages as a list of dictionaries.

Return type:

list[dict[str, Any]]

class OllamaMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

Ollama formatter for multi-agent conversations, where more than a user and an agent are involved.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the Ollama multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) -- The prompt to use for the conversation history section.

  • promote_tool_result_images (bool, defaults to False) -- Whether to promote images from tool results to user messages. Most LLM APIs don't support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) -- The token counter used for truncation.

  • max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the Ollama API.

Parameters:

msgs (list[Msg]) -- The list of messages containing tool calls/results to format.

Returns:

A list of dictionaries formatted for the Ollama API.

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the Ollama API.

Parameters:
  • msgs (list[Msg]) -- A list of Msg objects to be formatted.

  • is_first (bool, defaults to True) -- Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.

Returns:

A list of dictionaries formatted for the Ollama API.

Return type:

list[dict[str, Any]]

class DeepSeekChatFormatter[source]

Bases: TruncatedFormatterBase

The DeepSeek formatter class for the chatbot scenario, where only a user and an agent are involved. We use the role field to identify the different entities in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = False

Whether multi-agent conversations are supported

support_vision: bool = False

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

async _format(msgs)[source]

Format message objects into the DeepSeek API format.

Parameters:

msgs (list[Msg]) -- The list of message objects to format.

Returns:

The formatted messages as a list of dictionaries.

Return type:

list[dict[str, Any]]

class DeepSeekMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

DeepSeek formatter for multi-agent conversations, where more than a user and an agent are involved.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = False

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]

Initialize the DeepSeek multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) -- The prompt to use for the conversation history section.

  • token_counter (TokenCounterBase | None, optional) -- A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) -- The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the DeepSeek API.

Parameters:

msgs (list[Msg]) -- The list of messages containing tool calls/results to format.

Returns:

A list of dictionaries formatted for the DeepSeek API.

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the DeepSeek API.

Parameters:
  • msgs (list[Msg]) -- A list of Msg objects to be formatted.

  • is_first (bool, defaults to True) -- Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.

Returns:

A list of dictionaries formatted for the DeepSeek API.

Return type:

list[dict[str, Any]]