agentscope.models.zhipu_model module

Model wrapper for ZhipuAI models

class agentscope.models.zhipu_model.ZhipuAIWrapperBase(config_name: str, model_name: str | None = None, api_key: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ModelWrapperBase, ABC

The model wrapper for ZhipuAI API.

__init__(config_name: str, model_name: str | None = None, api_key: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any) → None [source]

Initialize the ZhipuAI client. The api_key is required to initialize the client. Other client args include base_url and timeout: base_url defaults to https://open.bigmodel.cn/api/paas/v4 if not specified, and timeout sets the HTTP request timeout. A configuration sketch follows the parameter list below.

Parameters:
  • config_name (str) – The name of the model config.

  • model_name (str, default None) – The name of the model to use in ZhipuAI API.

  • api_key (str, default None) – The API key for ZhipuAI API. If not specified, it will be read from the environment variable.

  • client_args (dict, default None) – The extra keyword arguments to initialize the ZhipuAI client.

  • generate_args (dict, default None) – The extra keyword arguments used in ZhipuAI API generation, e.g. temperature, seed.
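
As a rough illustration, these arguments are typically supplied through a model configuration. The sketch below is an assumption-based example: the key names mirror the parameters documented here, the base_url is the documented default, and the concrete timeout, temperature, and seed values are placeholders.

# Hypothetical model configuration for a ZhipuAI wrapper. Keys mirror the
# __init__ parameters above; the numeric values are illustrative only.
zhipuai_model_config = {
    "config_name": "my_zhipuai_config",
    "model_type": "zhipuai_chat",   # identifies ZhipuAIChatWrapper (see below)
    "model_name": "glm-4",
    "api_key": "{your_api_key}",    # placeholder; supply your own key
    "client_args": {
        # Documented default endpoint; override only if needed.
        "base_url": "https://open.bigmodel.cn/api/paas/v4",
        "timeout": 30,              # HTTP request timeout in seconds (assumed value)
    },
    "generate_args": {
        "temperature": 0.7,         # example generation arguments
        "seed": 42,
    },
}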

format(*args: Msg | Sequence[Msg]) → List[dict] | str [source]

Format the input messages into the format that the model API requires.

class agentscope.models.zhipu_model.ZhipuAIChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, stream: bool = False, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ZhipuAIWrapperBase

The model wrapper for ZhipuAI’s chat API.

Response:

```json
{
    "created": 1703487403,
    "id": "8239375684858666781",
    "model": "glm-4",
    "request_id": "8239375684858666781",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Drawing blueprints with ...",
                "role": "assistant"
            }
        }
    ],
    "usage": {
        "completion_tokens": 217,
        "prompt_tokens": 31,
        "total_tokens": 248
    }
}
```

model_type: str = 'zhipuai_chat'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

__init__(config_name: str, model_name: str | None = None, api_key: str | None = None, stream: bool = False, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any) → None [source]

Initialize the ZhipuAI client. The api_key is required to initialize the client. Other client args include base_url and timeout: base_url defaults to https://open.bigmodel.cn/api/paas/v4 if not specified, and timeout sets the HTTP request timeout. An instantiation sketch follows the parameter list below.

Parameters:
  • config_name (str) – The name of the model config.

  • model_name (str, default None) – The name of the model to use in ZhipuAI API.

  • api_key (str, default None) – The API key for ZhipuAI API. If not specified, it will be read from the environment variable.

  • stream (bool, default False) – Whether to enable stream mode.

  • client_args (dict, default None) – The extra keyword arguments to initialize the ZhipuAI client.

  • generate_args (dict, default None) – The extra keyword arguments used in ZhipuAI API generation, e.g. temperature, seed.
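
For a quick sense of the constructor, here is a minimal, hedged instantiation sketch. The arguments follow the signature documented above; the model name glm-4 comes from the response example, while the config name and api_key placeholder are assumptions.

from agentscope.models.zhipu_model import ZhipuAIChatWrapper

# Minimal instantiation sketch; arguments follow the documented signature.
model = ZhipuAIChatWrapper(
    config_name="zhipuai_chat_demo",        # hypothetical config name
    model_name="glm-4",
    api_key="{your_api_key}",               # placeholder; supply your own key
    stream=False,                           # set True to enable stream mode
    generate_args={"temperature": 0.7},     # optional extra generation args (assumed values)
)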

format(*args: Msg | Sequence[Msg]) → List[dict] [source]

A common format strategy for chat models, which formats the input messages into a system message (if present) and a user message containing the conversation history.

Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

# prompt1
[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:

args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object or a list of Msg objects. In distributed mode, placeholder messages are also allowed.

Returns:

The formatted messages.

Return type:

List[dict]
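
Putting format together with a model call, the following is a hedged usage sketch. It assumes Msg is importable from agentscope.message, that model is a ZhipuAIChatWrapper instance such as the one constructed above, and that calling the wrapper returns a response object exposing the generated content on a text field; these interface details are assumptions, not confirmed by this page.

from agentscope.message import Msg

# Build the prompt with the format strategy described above, then call the model.
prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("user", "What's the date today?", role="user"),
)
response = model(prompt)    # assumed call convention of the wrapper
print(response.text)        # assumed: generated text exposed on the response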

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used in the model API call.

class agentscope.models.zhipu_model.ZhipuAIEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ZhipuAIWrapperBase

The model wrapper for ZhipuAI embedding API.

Example Response:

```json
{
    "model": "embedding-2",
    "data": [
        {
            "embedding": [ (a total of 1024 elements)
                -0.02675454691052437,
                0.019060475751757622,
                ......
                -0.005519774276763201,
                0.014949671924114227
            ],
            "index": 0,
            "object": "embedding"
        }
    ],
    "object": "list",
    "usage": {
        "completion_tokens": 0,
        "prompt_tokens": 4,
        "total_tokens": 4
    }
}
```

model_type: str = 'zhipuai_embedding'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used in the model API call.
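
To close, a hedged sketch of calling the embedding wrapper. The model name embedding-2 is taken from the example response above; the config name, the way texts are passed, and the embedding attribute on the response are assumptions for illustration, not a confirmed interface.

from agentscope.models.zhipu_model import ZhipuAIEmbeddingWrapper

# Hypothetical embedding call; names and response-attribute access are assumptions.
embedding_model = ZhipuAIEmbeddingWrapper(
    config_name="zhipuai_embedding_demo",   # hypothetical config name
    model_name="embedding-2",               # model name from the example response
    api_key="{your_api_key}",               # placeholder; supply your own key
)
response = embedding_model("Hello, world!")  # assumed: text passed directly to the wrapper
print(response.embedding)                    # assumed: embedding vectors exposed on the response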