agentscope.formatter

The formatter module in agentscope.

class FormatterBase[source]

Bases: object

The base class for formatters.

abstract async format(*args, **kwargs)[source]

Format Msg objects into a list of dictionaries that satisfy the API requirements.

Parameters:
  • args (Any)

  • kwargs (Any)

Return type:

list[dict[str, Any]]

static assert_list_of_msgs(msgs)[source]

Assert that the input is a list of Msg objects.

Parameters:

msgs (list[Msg]) – A list of Msg objects to be validated.

Return type:

None

static convert_tool_result_to_string(output)[source]

Turn the tool result list into textual output, for compatibility with LLM APIs that don't support multimodal data in tool results.

For URL-based images, the URL is included in the returned list. For base64-encoded images, the local file path where the image is saved is included instead.

Parameters:

output (str | List[TextBlock | ImageBlock | AudioBlock | VideoBlock]) – The output of the tool response, including text and multimodal data like images and audio.

Returns:

A tuple containing the textual representation of the tool result and a list of tuples. The first element of each tuple is the local file path or URL of the multimodal data, and the second element is the corresponding block.

Return type:

tuple[str, list[tuple[str, ImageBlock | AudioBlock | VideoBlock | TextBlock]]]
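The behavior described above can be sketched as follows. This is an illustrative re-implementation, not AgentScope's actual code, and the block shapes are simplified dicts rather than real `TextBlock`/`ImageBlock` objects:

```python
# Minimal sketch of flattening a tool result into text plus a list of
# (source, block) pairs, mirroring the documented behavior of
# convert_tool_result_to_string. Blocks are simplified dicts here.
def flatten_tool_result(output):
    if isinstance(output, str):
        return output, []
    texts, media = [], []
    for block in output:
        if block["type"] == "text":
            texts.append(block["text"])
        else:
            # For URL-based media the URL itself is recorded; a real
            # implementation would save base64 payloads to a local file
            # and record that path instead.
            source = block["source"].get("url", "<local file path>")
            texts.append(f"[{block['type']}]: {source}")
            media.append((source, block))
    return "\n".join(texts), media
```

For example, a tool result containing a text block and a URL-based image block flattens into a single string mentioning the URL, plus one `(url, block)` pair.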

class TruncatedFormatterBase[source]

Bases: FormatterBase, ABC

Base class for truncated formatters, which format input messages into the required format while keeping the token count under a specified limit.

__init__(token_counter=None, max_tokens=None)[source]

Initialize the TruncatedFormatterBase.

Parameters:
  • token_counter (TokenCounterBase | None, optional) – A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async format(msgs, **kwargs)[source]

Format the input messages into the required format. If a token counter and a max token limit are provided, the messages will be truncated to fit the limit.

Parameters:
  • msgs (list[Msg]) – The input messages to be formatted.

  • kwargs (Any)

Returns:

The formatted messages in the required format.

Return type:

list[dict[str, Any]]
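The truncation step implied by this class can be sketched as a simple drop-oldest loop. This is a hedged illustration of the idea, not AgentScope's internal logic; the word-count function stands in for a real `TokenCounterBase`:

```python
# Sketch of truncate-then-format: drop the oldest messages until the
# counted tokens fit under max_tokens. Messages are plain dicts here.
def truncate_messages(msgs, count_tokens, max_tokens):
    msgs = list(msgs)
    while len(msgs) > 1 and count_tokens(msgs) > max_tokens:
        msgs.pop(0)  # drop the oldest message first
    return msgs

def word_count(msgs):
    # Crude stand-in for a token counter.
    return sum(len(m["content"].split()) for m in msgs)
```

With `max_tokens=3` and three messages totaling six "tokens", the oldest message is dropped and the remaining two fit the budget.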

async _format(msgs)[source]

Format the input messages into the required format. This method should be implemented by the subclasses.

Parameters:

msgs (list[Msg])

Return type:

list[dict[str, Any]]

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the LLM API.

Parameters:

msgs (list[Msg])

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the LLM API.

Parameters:
  • msgs (list[Msg])

  • is_first (bool)

Return type:

list[dict[str, Any]]

class DashScopeChatFormatter[source]

Bases: TruncatedFormatterBase

The DashScope formatter class for the chatbot scenario, where only a user and an agent are involved. We use the role field to identify the different entities in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = False

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, VideoBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(promote_tool_result_images=False, promote_tool_result_audios=False, promote_tool_result_videos=False, token_counter=None, max_tokens=None)[source]

Initialize the DashScope chat formatter.

Parameters:
  • promote_tool_result_images (bool, defaults to False) – Whether to promote images from tool results to user messages. Most LLM APIs don’t support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • promote_tool_result_audios (bool, defaults to False) – Whether to promote audios from tool results to user messages. Most LLM APIs don’t support audios in tool result blocks, but do support them in user message blocks. When True, audios are extracted and appended as a separate user message with explanatory text indicating their source.

  • promote_tool_result_videos (bool, defaults to False) – Whether to promote videos from tool results to user messages. Most LLM APIs don’t support videos in tool result blocks, but do support them in user message blocks. When True, videos are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) – A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None
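The `promote_tool_result_images` behavior described above can be illustrated with a small sketch. This is a conceptual re-implementation, not AgentScope internals; the message and block shapes are simplified dicts assumed for illustration:

```python
# Sketch of "promoting" images out of tool result messages: many chat
# APIs reject images inside tool results, so the images are re-sent as a
# follow-up user message with explanatory text.
def promote_images(formatted_msgs):
    promoted = []
    for msg in formatted_msgs:
        if msg["role"] == "tool":
            images = [b for b in msg.get("blocks", []) if b["type"] == "image"]
            if images:
                promoted.append({
                    "role": "user",
                    "content": "Images returned by the tool call above:",
                    "blocks": images,
                })
    return formatted_msgs + promoted
```

The same pattern applies to `promote_tool_result_audios` and `promote_tool_result_videos`, filtering on the corresponding block type.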

async _format(msgs)[source]

Format message objects into DashScope API format.

Parameters:

msgs (list[Msg]) – The list of message objects to format.

Returns:

The formatted messages as a list of dictionaries.

Return type:

list[dict[str, Any]]

class DashScopeMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

DashScope formatter for multi-agent conversations, where more than a user and an agent are involved.

Note

This formatter will combine previous messages (except tool calls/results) into a history section in the first system message with the conversation history prompt.

Note

For tool calls/results, they will be presented as separate messages as required by the DashScope API. Therefore, the tool call/result messages are expected to be placed at the end of the input messages.

Tip

Including the assistant's name in the system prompt is very important in multi-agent conversations, so that the LLM knows which role it is playing.
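The history-combining behavior described in the notes above can be sketched as follows. The prompt string matches the documented `conversation_history_prompt` default; the message shapes and the exact layout of the combined system message are simplifying assumptions, not AgentScope's exact output:

```python
# Hypothetical illustration of folding prior dialogue into the first
# system message using <history></history> tags.
HISTORY_PROMPT = (
    "# Conversation History\n"
    "The content between <history></history> tags contains "
    "your conversation history\n"
)

def build_system_message(sys_prompt, history):
    lines = "\n".join(f"{m['name']}: {m['content']}" for m in history)
    return {
        "role": "system",
        "content": f"{sys_prompt}\n\n{HISTORY_PROMPT}<history>\n{lines}\n</history>",
    }
```

Tool call/result messages are not folded into this section; per the note above, they remain separate trailing messages.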

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, VideoBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', promote_tool_result_images=False, promote_tool_result_audios=False, promote_tool_result_videos=False, token_counter=None, max_tokens=None)[source]

Initialize the DashScope multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) – The prompt to use for the conversation history section.

  • promote_tool_result_images (bool, defaults to False) – Whether to promote images from tool results to user messages. Most LLM APIs don’t support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • promote_tool_result_audios (bool, defaults to False) – Whether to promote audios from tool results to user messages. Most LLM APIs don’t support audios in tool result blocks, but do support them in user message blocks. When True, audios are extracted and appended as a separate user message with explanatory text indicating their source.

  • promote_tool_result_videos (bool, defaults to False) – Whether to promote videos from tool results to user messages. Most LLM APIs don’t support videos in tool result blocks, but do support them in user message blocks. When True, videos are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) – The token counter used for truncation.

  • max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the DashScope API.

Parameters:

msgs (list[Msg]) – The list of messages containing tool calls/results to format.

Returns:

A list of dictionaries formatted for the DashScope API.

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into a user message with conversation history tags. For the first agent message, it will include the conversation history prompt.

Parameters:
  • msgs (list[Msg]) – A list of Msg objects to be formatted.

  • is_first (bool, defaults to True) – Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.

Returns:

A list of dictionaries formatted for the DashScope API.

Return type:

list[dict[str, Any]]

class OpenAIChatFormatter[source]

Bases: TruncatedFormatterBase

The OpenAI formatter class for the chatbot scenario, where only a user and an agent are involved. We use the name field in the OpenAI API to identify the different entities in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision models are supported

supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]

Supported message blocks for the OpenAI API

__init__(promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the OpenAI chat formatter.

Parameters:
  • promote_tool_result_images (bool, defaults to False) – Whether to promote images from tool results to user messages. Most LLM APIs don’t support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) – A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async _format(msgs)[source]

Format message objects into OpenAI API required format.

Parameters:

msgs (list[Msg]) – The list of Msg objects to format.

Returns:

A list of dictionaries, where each dictionary has “name”, “role”, and “content” keys.

Return type:

list[dict[str, Any]]
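The output shape described above can be illustrated with a small constructor. This is a sketch of the documented dictionary keys ("name", "role", "content"); the structured-content form of the "content" value is an assumption based on OpenAI-style APIs, not a guarantee of AgentScope's exact output:

```python
# Sketch of one entry in the formatted list: the name field distinguishes
# speakers in OpenAI-compatible APIs, alongside the standard role field.
def to_openai_dict(name, role, text):
    return {
        "name": name,
        "role": role,
        "content": [{"type": "text", "text": text}],
    }
```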

class OpenAIMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

OpenAI formatter for multi-agent conversations, where more than a user and an agent are involved.

Tip

This formatter is compatible with the OpenAI API and OpenAI-compatible services such as vLLM and Azure OpenAI.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision models are supported

supported_blocks: list[type] = [TextBlock, ImageBlock, AudioBlock, ToolUseBlock, ToolResultBlock]

Supported message blocks for the OpenAI API

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the OpenAI multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) – The prompt to use for the conversation history section.

  • promote_tool_result_images (bool, defaults to False) – Whether to promote images from tool results to user messages. Most LLM APIs don’t support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) – A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the OpenAI API.

Parameters:

msgs (list[Msg])

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the OpenAI API.

Parameters:
  • msgs (list[Msg])

  • is_first (bool)

Return type:

list[dict[str, Any]]

class AnthropicChatFormatter[source]

Bases: TruncatedFormatterBase

The Anthropic formatter class for the chatbot scenario, where only a user and an agent are involved. We use the role field to identify the different entities in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = False

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

async _format(msgs)[source]

Format message objects into Anthropic API format.

Parameters:

msgs (list[Msg]) – The list of message objects to format.

Returns:

The formatted messages as a list of dictionaries.

Return type:

list[dict[str, Any]]

Note

Anthropic suggests always passing all previous thinking blocks back to the API in subsequent calls to maintain reasoning continuity. For more details, please refer to Anthropic’s documentation.
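The note above can be illustrated with a replayed assistant turn. This is a hedged sketch based on Anthropic's documented message shape for extended thinking (a "thinking" content block preceding the "text" block); the field values here are placeholders:

```python
# Illustration: when continuing a conversation, the prior assistant turn
# is replayed with its thinking block(s) kept ahead of the text block,
# as Anthropic recommends for reasoning continuity.
assistant_turn = {
    "role": "assistant",
    "content": [
        {"type": "thinking", "thinking": "...prior reasoning...", "signature": "..."},
        {"type": "text", "text": "Here is my answer."},
    ],
}
```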

class AnthropicMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

Anthropic formatter for multi-agent conversations, where more than a user and an agent are involved.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]

Initialize the Anthropic multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) – The prompt to use for the conversation history section.

  • token_counter (TokenCounterBase | None)

  • max_tokens (int | None)

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the Anthropic API.

Parameters:

msgs (list[Msg])

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the Anthropic API.

Parameters:
  • msgs (list[Msg])

  • is_first (bool)

Return type:

list[dict[str, Any]]

class GeminiChatFormatter[source]

Bases: TruncatedFormatterBase

The Gemini formatter class for the chatbot scenario, where only a user and an agent are involved. We use the role field to identify the different entities in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = False

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, VideoBlock, AudioBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

supported_extensions: dict[str, list[str]] = {'audio': ['mp3', 'wav', 'aiff', 'aac', 'ogg', 'flac'], 'image': ['png', 'jpeg', 'webp', 'heic', 'heif'], 'video': ['mp4', 'mpeg', 'mov', 'avi', 'x-flv', 'mpg', 'webm', 'wmv', '3gpp']}
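A small helper can check a file against the extension table above. The mapping is copied from the documented `supported_extensions` value; the helper itself (`is_supported`) is an illustrative assumption, not an AgentScope API:

```python
# Check whether a file extension is accepted for a given Gemini modality,
# using the formatter's documented extension table.
SUPPORTED_EXTENSIONS = {
    "audio": ["mp3", "wav", "aiff", "aac", "ogg", "flac"],
    "image": ["png", "jpeg", "webp", "heic", "heif"],
    "video": ["mp4", "mpeg", "mov", "avi", "x-flv", "mpg", "webm", "wmv", "3gpp"],
}

def is_supported(modality, filename):
    ext = filename.rsplit(".", 1)[-1].lower()
    return ext in SUPPORTED_EXTENSIONS.get(modality, [])
```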

__init__(promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the Gemini chat formatter.

Parameters:
  • promote_tool_result_images (bool, defaults to False) – Whether to promote images from tool results to user messages. Most LLM APIs don’t support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) – A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async _format(msgs)[source]

Format message objects into Gemini API required format.

Parameters:

msgs (list[Msg])

Return type:

list[dict]

class GeminiMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

The multi-agent formatter for Google Gemini API, where more than a user and an agent are involved.

Note

This formatter will combine previous messages (except tool calls/results) into a history section in the first system message with the conversation history prompt.

Note

For tool calls/results, they will be presented as separate messages as required by the Gemini API. Therefore, the tool call/result messages are expected to be placed at the end of the input messages.

Tip

Including the assistant's name in the system prompt is very important in multi-agent conversations, so that the LLM knows which role it is playing.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, VideoBlock, AudioBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the Gemini multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) – The prompt to be used for the conversation history section.

  • promote_tool_result_images (bool, defaults to False) – Whether to promote images from tool results to user messages. Most LLM APIs don’t support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) – The token counter used for truncation.

  • max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the Gemini API.

Parameters:

msgs (list[Msg]) – The list of messages containing tool calls/results to format.

Returns:

A list of dictionaries formatted for the Gemini API.

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the Gemini API.

Parameters:
  • msgs (list[Msg]) – A list of Msg objects to be formatted.

  • is_first (bool, defaults to True) – Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.

Returns:

A list of dictionaries formatted for the Gemini API.

Return type:

list[dict[str, Any]]

class OllamaChatFormatter[source]

Bases: TruncatedFormatterBase

The Ollama formatter class for the chatbot scenario, where only a user and an agent are involved. We use the role field to identify the different participants in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = False

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the Ollama chat formatter.

Parameters:
  • promote_tool_result_images (bool, defaults to False) – Whether to promote images from tool results to user messages. Most LLM APIs don’t support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) – A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async _format(msgs)[source]

Format message objects into Ollama API format.

Parameters:

msgs (list[Msg]) – The list of message objects to format.

Returns:

The formatted messages as a list of dictionaries.

Return type:

list[dict[str, Any]]

class OllamaMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

Ollama formatter for multi-agent conversations, where more than a user and an agent are involved.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = True

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ImageBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', promote_tool_result_images=False, token_counter=None, max_tokens=None)[source]

Initialize the Ollama multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) – The prompt to use for the conversation history section.

  • promote_tool_result_images (bool, defaults to False) – Whether to promote images from tool results to user messages. Most LLM APIs don’t support images in tool result blocks, but do support them in user message blocks. When True, images are extracted and appended as a separate user message with explanatory text indicating their source.

  • token_counter (TokenCounterBase | None, optional) – The token counter used for truncation.

  • max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If None, no truncation will be applied.

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the Ollama API.

Parameters:

msgs (list[Msg]) – The list of messages containing tool calls/results to format.

Returns:

A list of dictionaries formatted for the Ollama API.

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the Ollama API.

Parameters:
  • msgs (list[Msg]) – A list of Msg objects to be formatted.

  • is_first (bool, defaults to True) – Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.

Returns:

A list of dictionaries formatted for the Ollama API.

Return type:

list[dict[str, Any]]

class DeepSeekChatFormatter[source]

Bases: TruncatedFormatterBase

The DeepSeek formatter class for the chatbot scenario, where only a user and an agent are involved. We use the role field to identify the different entities in the conversation.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = False

Whether multi-agent conversations are supported

support_vision: bool = False

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

async _format(msgs)[source]

Format message objects into DeepSeek API format.

Parameters:

msgs (list[Msg]) – The list of message objects to format.

Returns:

The formatted messages as a list of dictionaries.

Return type:

list[dict[str, Any]]

class DeepSeekMultiAgentFormatter[source]

Bases: TruncatedFormatterBase

DeepSeek formatter for multi-agent conversations, where more than a user and an agent are involved.

support_tools_api: bool = True

Whether the tools API is supported

support_multiagent: bool = True

Whether multi-agent conversations are supported

support_vision: bool = False

Whether vision data is supported

supported_blocks: list[type] = [TextBlock, ToolUseBlock, ToolResultBlock]

The list of supported message blocks

__init__(conversation_history_prompt='# Conversation History\nThe content between <history></history> tags contains your conversation history\n', token_counter=None, max_tokens=None)[source]

Initialize the DeepSeek multi-agent formatter.

Parameters:
  • conversation_history_prompt (str) – The prompt to use for the conversation history section.

  • token_counter (TokenCounterBase | None, optional) – A token counter instance used to count tokens in the messages. If not provided, the formatter will format the messages without considering token limits.

  • max_tokens (int | None, optional) – The maximum number of tokens allowed in the formatted messages. If not provided, the formatter will not truncate the messages.

Return type:

None

async _format_tool_sequence(msgs)[source]

Given a sequence of tool call/result messages, format them into the required format for the DeepSeek API.

Parameters:

msgs (list[Msg]) – The list of messages containing tool calls/results to format.

Returns:

A list of dictionaries formatted for the DeepSeek API.

Return type:

list[dict[str, Any]]

async _format_agent_message(msgs, is_first=True)[source]

Given a sequence of messages without tool calls/results, format them into the required format for the DeepSeek API.

Parameters:
  • msgs (list[Msg]) – A list of Msg objects to be formatted.

  • is_first (bool, defaults to True) – Whether this is the first agent message in the conversation. If True, the conversation history prompt will be included.

Returns:

A list of dictionaries formatted for the DeepSeek API.

Return type:

list[dict[str, Any]]