agentscope.models

Import modules in models package.

class AnthropicChatWrapper(model_name: str, config_name: str | None = None, api_key: str | None = None, stream: bool = False, client_kwargs: dict | None = None)[source]

Bases: ModelWrapperBase

The Anthropic model wrapper for AgentScope.

format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) list[dict[str, object]][source]

Format the messages for the Anthropic model input.

Parameters:
  • *args (Union[Msg, list[Msg], None]) – The message(s) to be formatted. The None input will be ignored.

  • multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.

Returns:

A list of formatted messages.

Return type:

list[dict[str, object]]

format_tools_json_schemas(schemas: dict[str, dict]) list[dict][source]

Format the JSON schemas of the tool functions to the format that the model API provider expects.

Example

An example of the input schemas parsed from the service toolkit:

{
    "bing_search": {
        "type": "function",
        "function": {
            "name": "bing_search",
            "description": "Search the web using Bing.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query."
                    }
                },
                "required": ["query"]
            }
        }
    }
}

Parameters:

schemas (dict[str, dict]) – The tools JSON schemas parsed from the service toolkit module, which can be accessed by service_toolkit.json_schemas.

Returns:

The formatted JSON schemas of the tool functions.

Return type:

list[dict]
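For reference, the Anthropic Messages API expects tools as a flat list with name, description and input_schema fields, so a plausible formatted output for the schema above is the following (a sketch; the exact output is determined by the wrapper implementation):

[
    {
        "name": "bing_search",
        "description": "Search the web using Bing.",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query."
                }
            },
            "required": ["query"]
        }
    }
]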

model_type: str = 'anthropic_chat'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.
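A minimal usage sketch (the config values and model name are placeholders; agentscope.init registers model configs so that agents can look wrappers up by config_name, with model_type selecting this class):

import agentscope
from agentscope.message import Msg
from agentscope.models import AnthropicChatWrapper

# Register a config; "anthropic_chat" selects AnthropicChatWrapper.
agentscope.init(
    model_configs=[
        {
            "config_name": "my_claude",
            "model_type": "anthropic_chat",
            "model_name": "claude-3-5-sonnet-20240620",
            "api_key": "xxx",
        },
    ],
)

# Or instantiate the wrapper directly and call it with formatted messages.
model = AnthropicChatWrapper(
    model_name="claude-3-5-sonnet-20240620",
    api_key="xxx",
)
response = model(model.format(Msg("user", "Hi!", role="user")))
print(response.text)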

class DashScopeChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for DashScope’s chat API; refer to https://help.aliyun.com/zh/dashscope/developer-reference/api-details

Example Response:
  • Refer to

https://help.aliyun.com/zh/dashscope/developer-reference/quick-start?spm=a2c4g.11186623.0.0.7e346eb5RvirBw

{
    "status_code": 200,
    "request_id": "a75a1b22-e512-957d-891b-37db858ae738",
    "code": "",
    "message": "",
    "output": {
        "text": null,
        "finish_reason": null,
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": "xxx"
                }
            }
        ]
    },
    "usage": {
        "input_tokens": 25,
        "output_tokens": 77,
        "total_tokens": 102
    }
}
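A minimal config sketch (as a Python dict) for this wrapper (field names follow the constructor signature above; the values are placeholders, and generate_args is presumably forwarded to the DashScope API on each call):

{
    "config_name": "my_qwen",
    "model_type": "dashscope_chat",
    "model_name": "qwen-max",
    "api_key": "xxx",
    "stream": False,
    "generate_args": {
        "temperature": 0.7,
    },
}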
format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) List[dict][source]

A common format strategy for chat models, which keeps the system message (if any) and merges the remaining messages into a single user message.

Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

# prompt1
[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:
  • args (Union[Msg, list[Msg], None]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. The None object will be ignored.

  • multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.

Returns:

The formatted messages.

Return type:

List[dict]

format_tools_json_schemas(schemas: dict[str, dict]) list[dict][source]

Format the JSON schemas of the tool functions to the format that the model API provider expects.

Example

An example of the input schemas parsed from the service toolkit:

{
    "bing_search": {
        "type": "function",
        "function": {
            "name": "bing_search",
            "description": "Search the web using Bing.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query."
                    }
                },
                "required": ["query"]
            }
        }
    }
}

Parameters:

schemas (dict[str, dict]) – The tools JSON schemas parsed from the service toolkit module, which can be accessed by service_toolkit.json_schemas.

Returns:

The formatted JSON schemas of the tool functions.

Return type:

list[dict]

model_type: str = 'dashscope_chat'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class DashScopeImageSynthesisWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for the DashScope Image Synthesis API; refer to https://help.aliyun.com/zh/dashscope/developer-reference/quick-start-1

Response:
  • Refer to

https://help.aliyun.com/zh/dashscope/developer-reference/api-details-9?spm=a2c4g.11186623.0.0.7108fa70Op6eqF

{
    "status_code": 200,
    "request_id": "b54ffeb8-6212-9dac-808c-b3771cba3788",
    "code": null,
    "message": "",
    "output": {
        "task_id": "996523eb-034d-459b-ac88-b340b95007a4",
        "task_status": "SUCCEEDED",
        "results": [
            {
                "url": "RESULT_URL1"
            },
            {
                "url": "RESULT_URL2"
            },
        ],
        "task_metrics": {
            "TOTAL": 2,
            "SUCCEEDED": 2,
            "FAILED": 0
        }
    },
    "usage": {
        "image_count": 2
    }
}
model_type: str = 'dashscope_image_synthesis'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class DashScopeMultiModalWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for the DashScope Multimodal API; refer to https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-api

Response:
  • Refer to

https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-plus-api?spm=a2c4g.11186623.0.0.7fde1f5atQSalN

{
    "status_code": 200,
    "request_id": "a0dc436c-2ee7-93e0-9667-c462009dec4d",
    "code": "",
    "message": "",
    "output": {
        "text": null,
        "finish_reason": null,
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": [
                        {
                            "text": "这张图片显..."
                        }
                    ]
                }
            }
        ]
    },
    "usage": {
        "input_tokens": 1277,
        "output_tokens": 81,
        "image_tokens": 1247
    }
}
format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) list[dict][source]

Format the messages for DashScope Multimodal API.

The multimodal API has the following requirements:

  • The roles of messages must alternate between “user” and “assistant”.

  • The message with the role “system” should be the first message in the list.

  • If the system message exists, then the second message must have the role “user”.

  • The last message in the list should have the role “user”.

  • In each message, more than one figure is allowed.

With the above requirements, we format the messages as follows:

  • If the first message is a system message, it will be kept as the system prompt.

  • All other messages are merged into a conversation history prompt in a single message with the role “user”.

  • When there are multiple figures in the given messages, they will be attached to the user message in order. Note that if there are multiple figures, this strategy may cause misunderstanding for the model. For advanced solutions, developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt = model.format(
    Msg(
        "system",
        "You're a helpful assistant",
        role="system", url="figure1"
    ),
    Msg(
        "Bob",
        "How about this picture?",
        role="assistant", url="figure2"
    ),
    Msg(
        "user",
        "It's wonderful! How about mine?",
        role="user", image="figure3"
    )
)

The prompt will be as follows:

[
    {
        "role": "system",
        "content": [
            {"text": "You are a helpful assistant"},
            {"image": "figure1"}
        ]
    },
    {
        "role": "user",
        "content": [
            {"image": "figure2"},
            {"image": "figure3"},
            {
                "text": (
                    "## Conversation History\n"
                    "Bob: How about this picture?\n"
                    "user: It's wonderful! How about mine?"
                )
            },
        ]
    }
]

Note

In the multimodal API, the URL of local files should be prefixed with “file://”; this prefix will be added by this format function.
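A small sketch of the local-file case (the path and the model variable are placeholders; per the note above, format is expected to add the “file://” prefix to local URLs):

from agentscope.message import Msg

# `model` is a DashScopeMultiModalWrapper instance; the local path below
# should be rewritten to "file:///data/cat.png" by format().
msg = Msg("user", "Describe this image", role="user", url="/data/cat.png")
prompt = model.format(msg)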

Parameters:
  • args (Union[Msg, list[Msg], None]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. The None input will be ignored.

  • multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.

Returns:

The formatted messages.

Return type:

list[dict]

model_type: str = 'dashscope_multimodal'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class DashScopeTextEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for DashScope Text Embedding API.

Response:
  • Refer to

https://help.aliyun.com/zh/dashscope/developer-reference/text-embedding-api-details?spm=a2c4g.11186623.0.i3

{
    "status_code": 200, // 200 indicate success otherwise failed.
    "request_id": "fd564688-43f7-9595-b986", // The request id.
    "code": "", // If failed, the error code.
    "message": "", // If failed, the error message.
    "output": {
        "embeddings": [ // embeddings
            {
                "embedding": [ // one embedding output
                    -3.8450357913970947, ...,
                ],
                "text_index": 0 // the input index.
            }
        ]
    },
    "usage": {
        "total_tokens": 3 // the request tokens.
    }
}
model_type: str = 'dashscope_text_embedding'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class GeminiChatWrapper(config_name: str, model_name: str, api_key: str | None = None, stream: bool = False, **kwargs: Any)[source]

Bases: GeminiWrapperBase

The wrapper for the Google Gemini chat model, e.g., gemini-pro

static format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) List[dict][source]

This function provides a basic prompting strategy for the Gemini Chat API in multi-party conversation, which combines all input into a single string and wraps it into a user message.

We make the above decision based on the following constraints of the Gemini generate API:

1. In the Gemini generate_content API, the role field must be either user or model.

2. If we pass a list of messages to the generate_content API, the user role must speak at the beginning and end of the messages, and the user and model roles must alternate. This prevents us from building multi-party conversations, where the model may keep speaking under different names.

The above information is accurate as of 2024/03/21. More information about the Gemini generate_content API can be found at https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini

Based on the above considerations, we decide to combine all messages into a single user message. This is a simple and straightforward strategy; if you have any better ideas, pull requests and discussions are welcome in our GitHub repository https://github.com/agentscope/agentscope!

Parameters:
  • args (Union[Msg, list[Msg], None]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. The None input will be ignored.

  • multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.

Returns:

A list with one user message.

Return type:

List[dict]
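A short sketch of the resulting prompt (the output shape is an assumption based on the “one user message” behavior described above; the exact key names depend on the wrapper implementation):

from agentscope.message import Msg
from agentscope.models import GeminiChatWrapper

# format is a static method, so it can be called on the class directly.
prompt = GeminiChatWrapper.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi!", role="assistant"),
    Msg("user", "What's the date today?", role="user"),
)
# Plausibly a single user entry combining all inputs, e.g.:
# [{"role": "user", "parts": ["You're a helpful assistant\n..."]}]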

generation_method = 'generateContent'

The generation method used in the __call__ function.

model_type: str = 'gemini_chat'

The type of the model, which is used in model configuration.

class GeminiEmbeddingWrapper(config_name: str, model_name: str, api_key: str | None = None, **kwargs: Any)[source]

Bases: GeminiWrapperBase

The wrapper for the Google Gemini embedding model, e.g., models/embedding-001

Response:
{
    "embeddings": [
        {
            object (ContentEmbedding)
        }
    ]
}
model_type: str = 'gemini_embedding'

The type of the model, which is used in model configuration.

class LiteLLMChatWrapper(config_name: str, model_name: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: LiteLLMWrapperBase

The model wrapper based on the litellm chat API.

Note

  • litellm requires users to set their API keys via environment variables

  • Different LLMs require different environment variables

Example

  • For OpenAI models, set “OPENAI_API_KEY”

  • For models like “claude-2”, set “ANTHROPIC_API_KEY”

  • For Azure OpenAI models, set “AZURE_API_KEY”, “AZURE_API_BASE” and “AZURE_API_VERSION”

  • Refer to the docs at https://docs.litellm.ai/docs/
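A minimal setup sketch following the note above (the key and model name are placeholders):

import os

from agentscope.models import LiteLLMChatWrapper

# litellm reads provider credentials from environment variables.
os.environ["OPENAI_API_KEY"] = "sk-..."

model = LiteLLMChatWrapper(
    config_name="litellm_gpt",
    model_name="gpt-4o-mini",
)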

Response:
{
    'choices': [
        {
            'finish_reason': str,  # String: 'stop'
            'index': int,  # Integer: 0
            'message': {  # Dictionary [str, str]
                'role': str,  # String: 'assistant'
                'content': str  # String: "default message"
            }
        }
    ],
    'created': str,  # String: None
    'model': str,  # String: None
    'usage': {  # Dictionary [str, int]
        'prompt_tokens': int,  # Integer
        'completion_tokens': int,  # Integer
        'total_tokens': int  # Integer
    }
}
format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) List[dict][source]

A common format strategy for chat models, which keeps the system message (if any) and merges the remaining messages into a single user message.

Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

# prompt1
[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:
  • args (Union[Msg, list[Msg], None]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. The None input will be ignored.

  • multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.

Returns:

The formatted messages.

Return type:

List[dict]

model_type: str = 'litellm_chat'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class ModelResponse(text: str | None = None, embedding: Sequence | None = None, image_urls: Sequence[str] | None = None, raw: Any | None = None, parsed: Any | None = None, stream: Generator[str, None, None] | None = None, tool_calls: list[ToolUseBlock] | None = None)[source]

Bases: object

Encapsulation of data returned by the model.

The main purpose of this class is to align the return formats of different models and act as a bridge between models and agents.

property is_stream_exhausted: bool

Whether the stream has been processed already.

property stream: None | Generator[Tuple[bool, str], None, None]

Return the stream generator if it exists.

property text: str | None

Return the text field. If the stream field is available, the text field will be updated accordingly.
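A sketch of consuming a streaming response, based on the properties above (the (bool, str) tuple shape comes from the stream property's annotation; whether the str is a delta or the accumulated text is wrapper-dependent, so this sketch only relies on the final text field):

response = model(messages)  # any chat wrapper created with stream=True

if response.stream is not None:
    for _, chunk in response.stream:  # (bool, str) tuples
        ...  # e.g. render incremental output in a UI

# Once the stream is exhausted, the full text is available.
assert response.is_stream_exhausted
print(response.text)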

class ModelWrapperBase(config_name: str | None = None, model_name: str | None = None, **kwargs: Any)[source]

Bases: ABC

The base class for model wrapper.

format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) List[dict] | str[source]

Format the input messages into the format that the model API requires.

format_tools_json_schemas(schemas: dict[str, dict]) list[dict][source]

Format the JSON schemas of the tool functions to the format that the model API provider expects.

Example

An example of the input schemas parsed from the service toolkit:

{
    "bing_search": {
        "type": "function",
        "function": {
            "name": "bing_search",
            "description": "Search the web using Bing.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query."
                    }
                },
                "required": ["query"]
            }
        }
    }
}

Parameters:

schemas (dict[str, dict]) – The tools JSON schemas parsed from the service toolkit module, which can be accessed by service_toolkit.json_schemas.

Returns:

The formatted JSON schemas of the tool functions.

Return type:

list[dict]

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used when calling the model API.

model_type: str

The type of the model wrapper, which is to identify the model wrapper class in model configuration.
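A minimal sketch of a custom wrapper (the __call__ signature and echo logic are illustrative assumptions, not part of the documented base class contract):

from typing import Any, List, Union

from agentscope.message import Msg
from agentscope.models import ModelResponse, ModelWrapperBase

class EchoWrapper(ModelWrapperBase):
    """A toy wrapper that echoes the last input message back."""

    model_type: str = "echo_chat"  # identifies this class in model configs

    def __call__(self, messages: List[dict], **kwargs: Any) -> ModelResponse:
        # A real wrapper would call the provider's API here.
        return ModelResponse(text=messages[-1]["content"])

    def format(
        self,
        *args: Union[Msg, List[Msg], None],
        multi_agent_mode: bool = True,
    ) -> List[dict]:
        msgs: List[dict] = []
        for arg in args:
            if arg is None:
                continue
            for msg in arg if isinstance(arg, list) else [arg]:
                msgs.append({"role": msg.role, "content": msg.content})
        return msgs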

class OllamaChatWrapper(config_name: str, model_name: str, stream: bool = False, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]

Bases: OllamaWrapperBase

The model wrapper for Ollama chat API.

Response:
  • Refer to

https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion

{
    "model": "registry.ollama.ai/library/llama3:latest",
    "created_at": "2023-12-12T14:13:43.416799Z",
    "message": {
        "role": "assistant",
        "content": "Hello! How are you today?"
    },
    "done": true,
    "total_duration": 5191566416,
    "load_duration": 2154458,
    "prompt_eval_count": 26,
    "prompt_eval_duration": 383809000,
    "eval_count": 298,
    "eval_duration": 4799921000
}
format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) List[dict][source]

Format the messages for ollama Chat API.

The messages will be formatted into a system message (if provided) and a user message containing the conversation history, as shown in the example below.

Note:

1. This strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

2. For the Ollama chat API, the content field shouldn't be an empty string.

Example:

prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:
  • args (Union[Msg, list[Msg], None]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. The None input will be ignored.

  • multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.

Returns:

The formatted messages.

Return type:

List[dict]

model_type: str = 'ollama_chat'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class OllamaEmbeddingWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]

Bases: OllamaWrapperBase

The model wrapper for Ollama embedding API.

Response:
  • Refer to

https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings

{
    "model": "all-minilm",
    "embeddings": [[
        0.010071029, -0.0017594862, 0.05007221, 0.04692972,
        0.008599704, 0.105441414, -0.025878139, 0.12958129,
    ]]
}
model_type: str = 'ollama_embedding'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class OllamaGenerationWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]

Bases: OllamaWrapperBase

The model wrapper for Ollama generation API.

Response:
  • From

https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion

{
    "model": "llama3",
    "created_at": "2023-08-04T19:22:45.499127Z",
    "response": "The sky is blue because it is the color of  sky.",
    "done": true,
    "context": [1, 2, 3],
    "total_duration": 5043500667,
    "load_duration": 5025959,
    "prompt_eval_count": 26,
    "prompt_eval_duration": 325953000,
    "eval_count": 290,
    "eval_duration": 4709213000
}
model_type: str = 'ollama_generate'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class OpenAIChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: OpenAIWrapperBase

The model wrapper for OpenAI’s chat API.

format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) List[dict][source]

Format the input messages into the format that the OpenAI Chat API requires. If you’re using an OpenAI-compatible model without the “gpt-” prefix in its name, the format method will automatically format the input messages into the required format.

Parameters:
  • args (Union[Msg, list[Msg], None]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. The None input will be ignored.

  • multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.

Returns:

The formatted messages in the format that the OpenAI Chat API requires.

Return type:

List[dict]

format_tools_json_schemas(schemas: dict[str, dict]) list[dict][source]

Format the JSON schemas of the tool functions to the format that the model API provider expects.

Example

An example of the input schemas parsed from the service toolkit:

{
    "bing_search": {
        "type": "function",
        "function": {
            "name": "bing_search",
            "description": "Search the web using Bing.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query."
                    }
                },
                "required": ["query"]
            }
        }
    }
}

Parameters:

schemas (dict[str, dict]) – The tools JSON schemas parsed from the service toolkit module, which can be accessed by service_toolkit.json_schemas.

Returns:

The formatted JSON schemas of the tool functions.

Return type:

list[dict]

model_type: str = 'openai_chat'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class OpenAIDALLEWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: OpenAIWrapperBase

The model wrapper for OpenAI’s DALL·E API.

Response:
{
    "created": 1589478378,
    "data": [
        {
            "url": "https://..."
        },
        {
            "url": "https://..."
        }
    ]
}
model_type: str = 'openai_dall_e'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class OpenAIEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: OpenAIWrapperBase

The model wrapper for OpenAI embedding API.

Response:
  • Refer to

https://platform.openai.com/docs/api-reference/embeddings/create

{
    "object": "list",
    "data": [
        {
            "object": "embedding",
            "embedding": [
                0.0023064255,
                -0.009327292,
                .... (1536 floats total for ada-002)
                -0.0028842222,
            ],
            "index": 0
        }
    ],
    "model": "text-embedding-ada-002",
    "usage": {
        "prompt_tokens": 8,
        "total_tokens": 8
    }
}
model_type: str = 'openai_embedding'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class OpenAIWrapperBase(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ModelWrapperBase, ABC

The model wrapper for OpenAI API.

Response:
{
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "gpt-4o-mini",
    "system_fingerprint": "fp_44709d6fcb",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello there, how may I assist you?",
            },
            "logprobs": null,
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 9,
        "completion_tokens": 12,
        "total_tokens": 21
    }
}
class PostAPIChatWrapper(config_name: str, api_url: str, model_name: str | None = None, headers: dict | None = None, max_length: int = 2048, timeout: int = 30, json_args: dict | None = None, post_args: dict | None = None, max_retries: int = 3, messages_key: str = 'messages', retry_interval: int = 1, **kwargs: Any)[source]

Bases: PostAPIModelWrapperBase

A post API model wrapper compatible with the OpenAI chat API, e.g., vLLM and FastChat.

format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) List[dict][source]

Format the input messages into a list of dicts according to the model name. For example, if the model name is prefixed with “gpt-”, the input messages will be formatted for OpenAI models.

Parameters:
  • args (Union[Msg, list[Msg], None]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. The None input will be ignored.

  • multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.

Returns:

The formatted messages.

Return type:

List[dict]

model_type: str = 'post_api_chat'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.
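A config sketch (as a Python dict) for an OpenAI-compatible endpoint such as vLLM (field names follow the constructor signature above; the URL, model name and header values are placeholders):

{
    "config_name": "my_vllm",
    "model_type": "post_api_chat",
    "api_url": "http://localhost:8000/v1/chat/completions",
    "model_name": "llama3",
    "headers": {"Authorization": "Bearer xxx"},
    "messages_key": "messages",  # key under which formatted messages are sent
    "max_retries": 3,
    "retry_interval": 1,
}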

class PostAPIModelWrapperBase(config_name: str, api_url: str, model_name: str | None = None, headers: dict | None = None, max_length: int = 2048, timeout: int = 30, json_args: dict | None = None, post_args: dict | None = None, max_retries: int = 3, messages_key: str = 'messages', retry_interval: int = 1, **kwargs: Any)[source]

Bases: ModelWrapperBase, ABC

The base model wrapper for models deployed behind a POST API.

model_type: str

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class YiChatWrapper(config_name: str, model_name: str, api_key: str, max_tokens: int | None = None, top_p: float = 0.9, temperature: float = 0.3, stream: bool = False)[source]

Bases: ModelWrapperBase

The model wrapper for Yi Chat API.

Response:
{
    "id": "cmpl-ea89ae83",
    "object": "chat.completion",
    "created": 5785971,
    "model": "yi-large-rag",
    "usage": {
        "completion_tokens": 113,
        "prompt_tokens": 896,
        "total_tokens": 1009
    },
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Today in Los Angeles, the weather ...",
            },
            "finish_reason": "stop"
        }
    ]
}
format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) List[dict][source]

Format the messages into the required format of Yi Chat API.

Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

# prompt1
[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:
  • args (Union[Msg, list[Msg], None]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. The None input will be ignored.

  • multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.

Returns:

The formatted messages.

Return type:

List[dict]

model_type: str = 'yi_chat'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class ZhipuAIChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, stream: bool = False, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ZhipuAIWrapperBase

The model wrapper for ZhipuAI’s chat API.

Response:
{
    "created": 1703487403,
    "id": "8239375684858666781",
    "model": "glm-4",
    "request_id": "8239375684858666781",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Drawing blueprints with ...",
                "role": "assistant"
            }
        }
    ],
    "usage": {
        "completion_tokens": 217,
        "prompt_tokens": 31,
        "total_tokens": 248
    }
}
format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) List[dict][source]

A common format strategy for chat models, which keeps the system message (if any) and merges the remaining messages into a single user message.

Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

# prompt1
[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:
  • args (Union[Msg, list[Msg], None]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. The None input will be ignored.

  • multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.

Returns:

The formatted messages.

Return type:

List[dict]

model_type: str = 'zhipuai_chat'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

class ZhipuAIEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ZhipuAIWrapperBase

The model wrapper for ZhipuAI embedding API.

Example Response:

{
    "model": "embedding-2",
    "data": [
        {
            "embedding": [ (a total of 1024 elements)
                -0.02675454691052437,
                0.019060475751757622,
                ......
                -0.005519774276763201,
                0.014949671924114227
            ],
            "index": 0,
            "object": "embedding"
        }
    ],
    "object": "list",
    "usage": {
        "completion_tokens": 0,
        "prompt_tokens": 4,
        "total_tokens": 4
    }
}
model_type: str = 'zhipuai_embedding'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.