agentscope.models.dashscope_model module

Model wrapper for DashScope models

class agentscope.models.dashscope_model.DashScopeWrapperBase(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ModelWrapperBase, ABC

The model wrapper for DashScope API.

__init__(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any) → None [source]

Initialize the DashScope wrapper.

Parameters:
  • config_name (str) – The name of the model config.

  • model_name (str, default None) – The name of the model to use in DashScope API.

  • api_key (str, default None) – The API key for DashScope API.

  • generate_args (dict, default None) – The extra keyword arguments used in DashScope API generation, e.g. temperature, seed.

format(*args: Msg | Sequence[Msg]) → List[dict] | str [source]

Format the input string or dict into the format that the model API requires.

class agentscope.models.dashscope_model.DashScopeChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for DashScope’s chat API; refer to https://help.aliyun.com/zh/dashscope/developer-reference/api-details

Response:
  • Refer to https://help.aliyun.com/zh/dashscope/developer-reference/quick-start?spm=a2c4g.11186623.0.0.7e346eb5RvirBw

```json
{
    "status_code": 200,
    "request_id": "a75a1b22-e512-957d-891b-37db858ae738",
    "code": "",
    "message": "",
    "output": {
        "text": null,
        "finish_reason": null,
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": "xxx"
                }
            }
        ]
    },
    "usage": {
        "input_tokens": 25,
        "output_tokens": 77,
        "total_tokens": 102
    }
}
```

model_type: str = 'dashscope_chat'

The type of the model wrapper, which is used to identify the model wrapper class in model configuration.

deprecated_model_type: str = 'tongyi_chat'
__init__(config_name: str, model_name: str | None = None, api_key: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any) → None [source]

Initialize the DashScope wrapper.

Parameters:
  • config_name (str) – The name of the model config.

  • model_name (str, default None) – The name of the model to use in DashScope API.

  • api_key (str, default None) – The API key for DashScope API.

  • stream (bool, default False) – If True, the response will be a generator in the stream field of the returned ModelResponse object.

  • generate_args (dict, default None) – The extra keyword arguments used in DashScope API generation, e.g. temperature, seed.
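
These arguments can also be supplied through a model configuration. A minimal sketch, assuming agentscope.init accepts a list of such config dicts and using a placeholder model name and API key:

import agentscope

# Hypothetical configuration; "qwen-max" and the API key are placeholders.
agentscope.init(
    model_configs=[
        {
            "config_name": "my_dashscope_chat",   # referenced elsewhere via config_name
            "model_type": "dashscope_chat",       # selects DashScopeChatWrapper
            "model_name": "qwen-max",             # assumed DashScope model name
            "api_key": "xxx",                     # your DashScope API key
            "stream": False,
            "generate_args": {"temperature": 0.7, "seed": 42},
        },
    ],
)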

format(*args: Msg | Sequence[Msg]) → List[dict] [source]

A common format strategy for chat models, which will format the input messages into a user message.

Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

# prompt1
[
    {
        "role": "user",
        "content": (
            "You're a helpful assistant\n"
            "\n"
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:

args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.

Returns:

The formatted messages.

Return type:

List[dict]
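
A minimal end-to-end sketch, assuming Msg is importable from agentscope.message, the wrapper instance is callable with the formatted messages, and the returned ModelResponse exposes the reply in its text field; the model name is a placeholder:

from agentscope.message import Msg
from agentscope.models.dashscope_model import DashScopeChatWrapper

model = DashScopeChatWrapper(
    config_name="my_dashscope_chat",
    model_name="qwen-max",   # assumed model name, replace with your own
    api_key="xxx",
)
prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("user", "What's the date today?", role="user"),
)
response = model(prompt)   # assumes the wrapper is callable with the formatted messages
print(response.text)       # assumed: the reply text of the ModelResponse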

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used in model API calls.

class agentscope.models.dashscope_model.DashScopeImageSynthesisWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for the DashScope Image Synthesis API; refer to https://help.aliyun.com/zh/dashscope/developer-reference/quick-start-1

Response:
  • Refer to https://help.aliyun.com/zh/dashscope/developer-reference/api-details-9?spm=a2c4g.11186623.0.0.7108fa70Op6eqF

```json
{
    "status_code": 200,
    "request_id": "b54ffeb8-6212-9dac-808c-b3771cba3788",
    "code": null,
    "message": "",
    "output": {
        "task_id": "996523eb-034d-459b-ac88-b340b95007a4",
        "task_status": "SUCCEEDED",
        "results": [
            {
                "url": "RESULT_URL1"
            },
            {
                "url": "RESULT_URL2"
            }
        ],
        "task_metrics": {
            "TOTAL": 2,
            "SUCCEEDED": 2,
            "FAILED": 0
        }
    },
    "usage": {
        "image_count": 2
    }
}
```
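
A response like the one above can be obtained with a sketch along the following lines, assuming the wrapper is callable with a text prompt and the returned ModelResponse exposes the generated URLs in an image_urls field; the model name is a placeholder:

from agentscope.models.dashscope_model import DashScopeImageSynthesisWrapper

# Hypothetical configuration; model_name and api_key are placeholders.
model = DashScopeImageSynthesisWrapper(
    config_name="my_dashscope_image",
    model_name="wanx-v1",   # assumed DashScope image model name
    api_key="xxx",
)
response = model("A watercolor painting of a lighthouse at dawn")
print(response.image_urls)  # assumed: URLs of the generated images, as in the JSON above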

model_type: str = 'dashscope_image_synthesis'

The type of the model wrapper, which is used to identify the model wrapper class in model configuration.

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used in model API calls.

class agentscope.models.dashscope_model.DashScopeTextEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for DashScope Text Embedding API.

Response:
  • Refer to https://help.aliyun.com/zh/dashscope/developer-reference/text-embedding-api-details?spm=a2c4g.11186623.0.i3

```json
{
    "status_code": 200,  // 200 indicates success; otherwise the request failed.
    "request_id": "fd564688-43f7-9595-b986",  // The request id.
    "code": "",  // If failed, the error code.
    "message": "",  // If failed, the error message.
    "output": {
        "embeddings": [  // embeddings
            {
                "embedding": [  // one embedding output
                    -3.8450357913970947, …
                ],
                "text_index": 0  // the input index.
            }
        ]
    },
    "usage": {
        "total_tokens": 3  // the request tokens.
    }
}
```
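
A hedged usage sketch, assuming the wrapper is callable with a list of input texts and the returned ModelResponse exposes one vector per text in its embedding field; the model name is a placeholder:

from agentscope.models.dashscope_model import DashScopeTextEmbeddingWrapper

# Hypothetical configuration; model_name and api_key are placeholders.
model = DashScopeTextEmbeddingWrapper(
    config_name="my_dashscope_embedding",
    model_name="text-embedding-v2",   # assumed DashScope embedding model name
    api_key="xxx",
)
response = model(["Hello, world!"])   # assumes a list of input texts
print(len(response.embedding))        # assumed: one embedding vector per input text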

model_type: str = 'dashscope_text_embedding'

The type of the model wrapper, which is used to identify the model wrapper class in model configuration.

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used in model API calls.

class agentscope.models.dashscope_model.DashScopeMultiModalWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for the DashScope Multimodal API; refer to https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-api

Response:
  • Refer to https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-plus-api?spm=a2c4g.11186623.0.0.7fde1f5atQSalN

```json
{
    "status_code": 200,
    "request_id": "a0dc436c-2ee7-93e0-9667-c462009dec4d",
    "code": "",
    "message": "",
    "output": {
        "text": null,
        "finish_reason": null,
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": [
                        {
                            "text": "这张图片显…"
                        }
                    ]
                }
            }
        ]
    },
    "usage": {
        "input_tokens": 1277,
        "output_tokens": 81,
        "image_tokens": 1247
    }
}
```
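
A hedged usage sketch, assuming Msg is importable from agentscope.message, the wrapper is callable with the messages produced by format below, and the reply is exposed in the text field of the returned ModelResponse; the model name and image path are placeholders:

from agentscope.message import Msg
from agentscope.models.dashscope_model import DashScopeMultiModalWrapper

# Hypothetical configuration; model_name and api_key are placeholders.
model = DashScopeMultiModalWrapper(
    config_name="my_dashscope_multimodal",
    model_name="qwen-vl-plus",   # assumed DashScope multimodal model name
    api_key="xxx",
)
prompt = model.format(
    Msg("user", "Describe this picture.", role="user", url="/path/to/figure.png"),
)
response = model(prompt)   # assumes the wrapper accepts the formatted messages
print(response.text)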

model_type: str = 'dashscope_multimodal'

The type of the model wrapper, which is used to identify the model wrapper class in model configuration.

format(*args: Msg | Sequence[Msg]) → List [source]

Format the messages for DashScope Multimodal API.

The multimodal API has the following requirements:

  • The roles of messages must alternate between “user” and “assistant”.

  • The message with the role “system” should be the first message in the list.

  • If the system message exists, then the second message must have the role “user”.

  • The last message in the list should have the role “user”.

  • In each message, more than one figure is allowed.

With the above requirements, we format the messages as follows:

  • If the first message is a system message, then we will keep it as the system prompt.

  • We merge all messages into a conversation history prompt in a single message with the role “user”.

  • When there are multiple figures in the given messages, we will attach them to the user message in order. Note that if there are multiple figures, this strategy may cause misunderstanding for the model. For advanced solutions, developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt = model.format(
    Msg(
        "system",
        "You're a helpful assistant",
        role="system", url="figure1"
    ),
    Msg(
        "Bob",
        "How about this picture?",
        role="assistant", url="figure2"
    ),
    Msg(
        "user",
        "It's wonderful! How about mine?",
        role="user", image="figure3"
    )
)

The prompt will be as follows:

[
    {
        "role": "system",
        "content": [
            {"text": "You are a helpful assistant"},
            {"image": "figure1"}
        ]
    },
    {
        "role": "user",
        "content": [
            {"image": "figure2"},
            {"image": "figure3"},
            {
                "text": (
                    "## Conversation History\n"
                    "Bob: How about this picture?\n"
                    "user: It's wonderful! How about mine?"
                )
            },
        ]
    }
]

Note

In the multimodal API, the url of a local file should be prefixed with “file://”, which will be added by this format function.

Parameters:

args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.

Returns:

The formatted messages.

Return type:

List[dict]

convert_url(url: str | Sequence[str] | None) → List[dict] [source]

Convert the url to the format required by the DashScope API. Note that for local files, the prefix “file://” will be added.

Parameters:

url (Union[str, Sequence[str], None]) – A string of url or a list of urls to be converted.

Returns:

A list of dictionaries with key as the type of the url and value as the url. Only “image” and “audio” are supported.

Return type:

List[dict]
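
For illustration only, assuming model is a DashScopeMultiModalWrapper instance as in the sketch above; the exact key chosen (“image” or “audio”) depends on the file type of each url:

# Hedged illustration of the documented behaviour.
print(model.convert_url("figure.png"))
# e.g. [{"image": "file://figure.png"}]   <- "file://" prefix added for a local file

print(model.convert_url(["https://example.com/a.png", "https://example.com/b.wav"]))
# e.g. [{"image": "https://example.com/a.png"}, {"audio": "https://example.com/b.wav"}]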

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used in model API calls.