agentscope.models.model module

The configuration file should contain a single model config or a list of model configs, and each model config should follow the format below.

{
    "config_name": "{config_name}",
    "model_type": "openai_chat" | "post_api" | ...,
    ...
}

After that, you can specify the model by its {config_name}.
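
For example, a minimal sketch might look like the following; the config file path, the "my-gpt-3.5" config name, and the DialogAgent usage are illustrative assumptions rather than part of this module:

import agentscope
from agentscope.agents import DialogAgent

# Load one or more model configs from a file; the path is a placeholder.
agentscope.init(model_configs="./model_configs.json")

# Refer to a model by the "config_name" declared in the config file.
agent = DialogAgent(
    name="assistant",
    sys_prompt="You are a helpful assistant.",
    model_config_name="my-gpt-3.5",
)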

Note

The parameters differ across model types. For the OpenAI API, the format is:

{
    "config_name": "{id of your model}",
    "model_type": "openai_chat",
    "model_name": "{model_name_for_openai, e.g. gpt-3.5-turbo}",
    "api_key": "{your_api_key}",
    "organization": "{your_organization, if needed}",
    "client_args": {
        # ...
    },
    "generate_args": {
        # ...
    }
}
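
For instance, a filled-in config might look like the following sketch, where the config name, api_key, and generate_args values are placeholders:

{
    "config_name": "my-gpt-3.5",
    "model_type": "openai_chat",
    "model_name": "gpt-3.5-turbo",
    "api_key": "sk-...",
    "generate_args": {
        "temperature": 0.7
    }
}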

For the Post API, taking the Hugging Face inference API as an example, the format is:

{
    "config_name": "{config_name}",
    "model_type": "post_api",
    "api_url": "{api_url}",
    "headers": {"Authorization": "Bearer {API_TOKEN}"},
    "max_length": {max_length_of_model},
    "timeout": {timeout},
    "max_retries": {max_retries},
    "generate_args": {
        "temperature": 0.5,
        # ...
    }
}
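
As a concrete sketch, a config pointing at a hosted Hugging Face model might look like this; the URL, token, and numeric values are illustrative placeholders:

{
    "config_name": "my-hf-model",
    "model_type": "post_api",
    "api_url": "https://api-inference.huggingface.co/models/gpt2",
    "headers": {"Authorization": "Bearer hf_..."},
    "max_length": 1024,
    "timeout": 30,
    "max_retries": 3,
    "generate_args": {
        "temperature": 0.5
    }
}
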
class agentscope.models.model.ModelWrapperBase(config_name: str, model_name: str, **kwargs: Any) [source]

Bases: object

The base class for model wrapper.

model_type: str

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

__init__(config_name: str, model_name: str, **kwargs: Any) → None [source]

Base class for model wrapper.

All model wrappers should inherit this class and implement the __call__ function; a minimal subclass sketch follows the parameter list below.

Parameters:
  • config_name (str) – The id of the model configuration, which is used to extract the configuration from the config file.

  • model_name (str) – The name of the model.
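
A minimal sketch of such a subclass is shown below. The EchoWrapper name and its echo behavior are invented for illustration, and the ModelResponse import assumes the response type exposed by agentscope.models; a real wrapper would call an actual model API in __call__.

from typing import Any

from agentscope.models import ModelResponse, ModelWrapperBase


class EchoWrapper(ModelWrapperBase):
    """A toy wrapper that simply echoes the prompt back (illustrative only)."""

    # Identifies this wrapper class in model configurations.
    model_type: str = "echo"

    def __call__(self, prompt: str, **kwargs: Any) -> ModelResponse:
        # A real implementation would send `prompt` to a model API here
        # and wrap the API's reply in a ModelResponse.
        return ModelResponse(text=prompt)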

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used in model api calling.

classmethod get_wrapper(model_type: str) → Type[ModelWrapperBase] [source]

Get the model wrapper class registered under the given model type.
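
For example, a lookup for the built-in "openai_chat" wrapper might look like this (assuming that wrapper is registered):

from agentscope.models import ModelWrapperBase

# Returns the wrapper class registered under the given model_type,
# e.g. the OpenAI chat wrapper class.
wrapper_cls = ModelWrapperBase.get_wrapper("openai_chat")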

format(*args: Msg | Sequence[Msg]) → List[dict] | str [source]

Format the input messages into the format that the model API requires.

static format_for_common_chat_models(*args: Msg | Sequence[Msg]) → List[dict] [source]

A common format strategy for chat models, which will format the input messages into a system message (if provided) and a user message.

Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

# prompt1
[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:

args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

The formatted messages.

Return type:

List[dict]