agentscope.models package
Submodules
- agentscope.models.dashscope_model module
- agentscope.models.gemini_model module
- agentscope.models.litellm_model module
- agentscope.models.model module
- agentscope.models.ollama_model module
- agentscope.models.openai_model module
- agentscope.models.post_model module
- agentscope.models.response module
- agentscope.models.yi_model module
- agentscope.models.zhipu_model module
Module contents
Import modules in models package.
- class agentscope.models.ModelWrapperBase(config_name: str, model_name: str, **kwargs: Any)[source]
Bases:
object
The base class for model wrapper.
- model_type: str
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- __init__(config_name: str, model_name: str, **kwargs: Any) None [source]
Base class for model wrapper.
All model wrappers should inherit this class and implement the __call__ function.
- Parameters:
config_name (str) – The id of the model, which is used to extract configuration from the config file.
model_name (str) – The name of the model.
- config_name: str
The name of the model configuration.
- model_name: str
The name of the model, which is used in model api calling.
- classmethod get_wrapper(model_type: str) Type[ModelWrapperBase] [source]
Get the specific model wrapper
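For illustration, a minimal sketch of resolving a wrapper class from its model_type; the "openai_chat" value is taken from the wrappers documented below, and the expected result is an assumption.

    from agentscope.models import ModelWrapperBase

    # Look up the wrapper class registered under a given model_type.
    wrapper_cls = ModelWrapperBase.get_wrapper(model_type="openai_chat")
    print(wrapper_cls.__name__)  # expected to be "OpenAIChatWrapper"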
- format(*args: Msg | Sequence[Msg]) List[dict] | str [source]
Format the input string or dict into the format that the model API requires.
- static format_for_common_chat_models(*args: Msg | Sequence[Msg]) List[dict] [source]
A common format strategy for chat models, which will format the input messages into a user message.
Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.
The following is an example:
prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)
The prompt will be as follows:
# prompt1
[
    {
        "role": "user",
        "content": (
            "You're a helpful assistant\n"
            "\n"
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
The formatted messages.
- Return type:
List[dict]
- class agentscope.models.ModelResponse(text: str | None = None, embedding: Sequence | None = None, image_urls: Sequence[str] | None = None, raw: Any | None = None, parsed: Any | None = None, stream: Generator[str, None, None] | None = None)[source]
Bases:
object
Encapsulation of data returned by the model.
The main purpose of this class is to align the return formats of different models and act as a bridge between models and agents.
- __init__(text: str | None = None, embedding: Sequence | None = None, image_urls: Sequence[str] | None = None, raw: Any | None = None, parsed: Any | None = None, stream: Generator[str, None, None] | None = None) None [source]
Initialize the model response.
- Parameters:
text (str, optional) – The text field.
embedding (Sequence, optional) – The embedding returned by the model.
image_urls (Sequence[str], optional) – The image URLs returned by the model.
raw (Any, optional) – The raw data returned by the model.
parsed (Any, optional) – The parsed data returned by the model.
stream (Generator, optional) – The stream data returned by the model.
- property text: str
Return the text field. If the stream field is available, the text field will be updated accordingly.
- property stream: None | Generator[Tuple[bool, str], None, None]
Return the stream generator if it exists.
- property is_stream_exhausted: bool
Whether the stream has been processed already.
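For illustration, a minimal sketch of constructing and consuming a ModelResponse. The stream below is a hand-built generator of text chunks, and the exact meaning of the (bool, str) tuples yielded by the stream property (e.g. a last-chunk flag plus the accumulated text) is an assumption based on the documented return type.

    from agentscope.models import ModelResponse

    # A plain text response, similar to what a chat model wrapper returns.
    response = ModelResponse(text="Hello!", raw={"content": "Hello!"})
    print(response.text)  # "Hello!"

    # A streaming response built from a generator of text chunks.
    streamed = ModelResponse(stream=(chunk for chunk in ["Hel", "Hello!"]))
    if streamed.stream is not None:
        for finished, text in streamed.stream:
            # The tuple semantics here are assumed from the return type above.
            print(finished, text)
    print(streamed.is_stream_exhausted)  # True once the generator is consumed
    print(streamed.text)                 # text field updated from the consumed stream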
- class agentscope.models.PostAPIModelWrapperBase(config_name: str, api_url: str, headers: dict | None = None, max_length: int = 2048, timeout: int = 30, json_args: dict | None = None, post_args: dict | None = None, max_retries: int = 3, messages_key: str = 'messages', retry_interval: int = 1, **kwargs: Any)[source]
Bases:
ModelWrapperBase, ABC
The base model wrapper for the model deployed on the POST API.
- model_type: str = 'post_api'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- __init__(config_name: str, api_url: str, headers: dict | None = None, max_length: int = 2048, timeout: int = 30, json_args: dict | None = None, post_args: dict | None = None, max_retries: int = 3, messages_key: str = 'messages', retry_interval: int = 1, **kwargs: Any) None [source]
Initialize the model wrapper.
- Parameters:
config_name (str) – The id of the model.
api_url (str) – The url of the post request api.
headers (dict, defaults to None) – The headers of the api. Defaults to None.
max_length (int, defaults to 2048) – The maximum length of the model.
timeout (int, defaults to 30) – The timeout of the api. Defaults to 30.
json_args (dict, defaults to None) – The json arguments of the api. Defaults to None.
post_args (dict, defaults to None) – The post arguments of the api. Defaults to None.
max_retries (int, defaults to 3) – The maximum number of retries when the parse_func raises an exception.
messages_key (str, defaults to 'messages') – The key of the input messages in the json argument.
retry_interval (int, defaults to 1) – The interval between retries when a request fails.
Note
When a PostAPIModelWrapperBase object is called, the arguments of the POST request will be used as follows:
requests.post(
    url=api_url,
    headers=headers,
    json={
        messages_key: messages,
        **json_args
    },
    **post_args
)
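For reference, a hedged sketch of configuring a post-API wrapper with these arguments; the endpoint, header, and json_args values are placeholders, and the concrete PostAPIChatWrapper subclass documented below is used because this base class is abstract.

    from agentscope.models import PostAPIChatWrapper

    # Placeholder endpoint and credentials; json_args is merged into the request body.
    model = PostAPIChatWrapper(
        config_name="my_post_api",
        api_url="https://example.com/v1/chat/completions",
        headers={"Authorization": "Bearer <token>"},
        json_args={"model": "my-served-model", "temperature": 0.7},
        messages_key="messages",
        max_retries=3,
    )
    # Calling the wrapper issues the POST request shown in the note above.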
- class agentscope.models.PostAPIChatWrapper(config_name: str, api_url: str, headers: dict | None = None, max_length: int = 2048, timeout: int = 30, json_args: dict | None = None, post_args: dict | None = None, max_retries: int = 3, messages_key: str = 'messages', retry_interval: int = 1, **kwargs: Any)[source]
Bases:
PostAPIModelWrapperBase
A post API model wrapper compatible with the OpenAI chat API, e.g., vLLM and FastChat.
- model_type: str = 'post_api_chat'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- format(*args: Msg | Sequence[Msg]) List[dict] [source]
Format the input messages into a list of dicts, which is compatible with the OpenAI Chat API.
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
The formatted messages.
- Return type:
List[dict]
- class agentscope.models.OpenAIWrapperBase(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
ModelWrapperBase, ABC
The model wrapper for OpenAI API.
- Response:
{
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "gpt-4o-mini",
    "system_fingerprint": "fp_44709d6fcb",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello there, how may I assist you today?"
            },
            "logprobs": null,
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 9,
        "completion_tokens": 12,
        "total_tokens": 21
    }
}
- __init__(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any) None [source]
Initialize the openai client.
- Parameters:
config_name (str) – The name of the model config.
model_name (str, default None) – The name of the model to use in OpenAI API.
api_key (str, default None) – The API key for OpenAI API. If not specified, it will be read from the environment variable OPENAI_API_KEY.
organization (str, default None) – The organization ID for OpenAI API. If not specified, it will be read from the environment variable OPENAI_ORGANIZATION.
client_args (dict, default None) – The extra keyword arguments to initialize the OpenAI client.
generate_args (dict, default None) – The extra keyword arguments used in openai api generation, e.g. temperature, seed.
- class agentscope.models.OpenAIChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
OpenAIWrapperBase
The model wrapper for OpenAI’s chat API.
- model_type: str = 'openai_chat'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- deprecated_model_type: str = 'openai'
- substrings_in_vision_models_names = ['gpt-4-turbo', 'vision', 'gpt-4o']
The substrings in the model names of vision models.
- __init__(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any) None [source]
Initialize the openai client.
- Parameters:
config_name (str) – The name of the model config.
model_name (str, default None) – The name of the model to use in OpenAI API.
api_key (str, default None) – The API key for OpenAI API. If not specified, it will be read from the environment variable OPENAI_API_KEY.
organization (str, default None) – The organization ID for OpenAI API. If not specified, it will be read from the environment variable OPENAI_ORGANIZATION.
client_args (dict, default None) – The extra keyword arguments to initialize the OpenAI client.
stream (bool, default False) – Whether to enable stream mode.
generate_args (dict, default None) – The extra keyword arguments used in openai api generation, e.g. temperature, seed.
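For reference, a minimal usage sketch of this wrapper; the API key is a placeholder, "gpt-4o-mini" is just an example model name, and calling the wrapper performs a real OpenAI API request.

    from agentscope.message import Msg
    from agentscope.models import OpenAIChatWrapper

    model = OpenAIChatWrapper(
        config_name="gpt-4o-mini_config",
        model_name="gpt-4o-mini",
        api_key="<OPENAI_API_KEY>",   # or rely on the OPENAI_API_KEY env variable
        stream=False,
        generate_args={"temperature": 0.5},
    )

    prompt = model.format(
        Msg("system", "You're a helpful assistant", role="system"),
        Msg("user", "What's the date today?", role="user"),
    )
    response = model(prompt)  # returns a ModelResponse
    print(response.text)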
- static static_format(*args: Msg | Sequence[Msg], model_name: str) List[dict] [source]
A static version of the format method, which can be used without initializing the OpenAIChatWrapper object.
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
model_name (str) – The name of the model to use in OpenAI API.
- Returns:
The formatted messages in the format that OpenAI Chat API required.
- Return type:
List[dict]
- format(*args: Msg | Sequence[Msg]) List[dict] [source]
Format the input strings and dictionaries into the format that the OpenAI Chat API requires.
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
The formatted messages in the format that OpenAI Chat API required.
- Return type:
List[dict]
- class agentscope.models.OpenAIDALLEWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
OpenAIWrapperBase
The model wrapper for OpenAI’s DALL·E API.
- model_type: str = 'openai_dall_e'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- class agentscope.models.OpenAIEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
OpenAIWrapperBase
The model wrapper for OpenAI embedding API.
- Response:
Refer to
https://platform.openai.com/docs/api-reference/embeddings/create
{
    "object": "list",
    "data": [
        {
            "object": "embedding",
            "embedding": [
                0.0023064255,
                -0.009327292,
                .... (1536 floats total for ada-002)
                -0.0028842222,
            ],
            "index": 0
        }
    ],
    "model": "text-embedding-ada-002",
    "usage": {
        "prompt_tokens": 8,
        "total_tokens": 8
    }
}
- model_type: str = 'openai_embedding'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- class agentscope.models.DashScopeChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
DashScopeWrapperBase
The model wrapper for DashScope’s chat API, refer to https://help.aliyun.com/zh/dashscope/developer-reference/api-details
- Response:
Refer to
{
    "status_code": 200,
    "request_id": "a75a1b22-e512-957d-891b-37db858ae738",
    "code": "",
    "message": "",
    "output": {
        "text": null,
        "finish_reason": null,
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": "xxx"
                }
            }
        ]
    },
    "usage": {
        "input_tokens": 25,
        "output_tokens": 77,
        "total_tokens": 102
    }
}
- model_type: str = 'dashscope_chat'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- deprecated_model_type: str = 'tongyi_chat'
- __init__(config_name: str, model_name: str | None = None, api_key: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any) None [source]
Initialize the DashScope wrapper.
- Parameters:
config_name (str) – The name of the model config.
model_name (str, default None) – The name of the model to use in DashScope API.
api_key (str, default None) – The API key for DashScope API.
stream (bool, default False) – If True, the response will be a generator in the stream field of the returned ModelResponse object.
generate_args (dict, default None) – The extra keyword arguments used in DashScope api generation, e.g. temperature, seed.
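For reference, a hedged sketch of a model configuration for this wrapper, assuming configs are registered via agentscope.init(model_configs=...); the API key is a placeholder and "qwen-max" is only an example model name.

    import agentscope

    dashscope_config = {
        "config_name": "qwen_config",
        "model_type": "dashscope_chat",   # matches DashScopeChatWrapper.model_type
        "model_name": "qwen-max",
        "api_key": "<DASHSCOPE_API_KEY>",
        "stream": False,
        "generate_args": {"temperature": 0.5},
    }
    agentscope.init(model_configs=[dashscope_config])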
- format(*args: Msg | Sequence[Msg]) List[dict] [source]
A common format strategy for chat models, which will format the input messages into a user message.
Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.
The following is an example:
prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)
The prompt will be as follows:
# prompt1
[
    {
        "role": "user",
        "content": (
            "You're a helpful assistant\n"
            "\n"
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
The formatted messages.
- Return type:
List[dict]
- class agentscope.models.DashScopeImageSynthesisWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
DashScopeWrapperBase
The model wrapper for DashScope Image Synthesis API, refer to https://help.aliyun.com/zh/dashscope/developer-reference/quick-start-1
- Response:
Refer to
{
    "status_code": 200,
    "request_id": "b54ffeb8-6212-9dac-808c-b3771cba3788",
    "code": null,
    "message": "",
    "output": {
        "task_id": "996523eb-034d-459b-ac88-b340b95007a4",
        "task_status": "SUCCEEDED",
        "results": [
            {
                "url": "RESULT_URL1"
            },
            {
                "url": "RESULT_URL2"
            },
        ],
        "task_metrics": {
            "TOTAL": 2,
            "SUCCEEDED": 2,
            "FAILED": 0
        }
    },
    "usage": {
        "image_count": 2
    }
}
- model_type: str = 'dashscope_image_synthesis'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- class agentscope.models.DashScopeTextEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
DashScopeWrapperBase
The model wrapper for DashScope Text Embedding API.
- Response:
Refer to
{
    "status_code": 200,                        // 200 indicates success, otherwise failed.
    "request_id": "fd564688-43f7-9595-b986",   // The request id.
    "code": "",                                // If failed, the error code.
    "message": "",                             // If failed, the error message.
    "output": {
        "embeddings": [                        // embeddings
            {
                "embedding": [                 // one embedding output
                    -3.8450357913970947, ...,
                ],
                "text_index": 0                // the input index.
            }
        ]
    },
    "usage": {
        "total_tokens": 3                      // the request tokens.
    }
}
- model_type: str = 'dashscope_text_embedding'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- class agentscope.models.DashScopeMultiModalWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
DashScopeWrapperBase
The model wrapper for DashScope Multimodal API, refer to https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-api
- Response:
Refer to
{
    "status_code": 200,
    "request_id": "a0dc436c-2ee7-93e0-9667-c462009dec4d",
    "code": "",
    "message": "",
    "output": {
        "text": null,
        "finish_reason": null,
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": [
                        {
                            "text": "这张图片显…"
                        }
                    ]
                }
            }
        ]
    },
    "usage": {
        "input_tokens": 1277,
        "output_tokens": 81,
        "image_tokens": 1247
    }
}
- model_type: str = 'dashscope_multimodal'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- format(*args: Msg | Sequence[Msg]) List [source]
Format the messages for DashScope Multimodal API.
The multimodal API has the following requirements:
- The roles of messages must alternate between "user" and "assistant".
- The message with the role "system" should be the first message in the list.
- If the system message exists, then the second message must have the role "user".
- The last message in the list should have the role "user".
- In each message, more than one figure is allowed.
With the above requirements, we format the messages as follows:
- If the first message is a system message, we keep it as the system prompt.
- We merge all remaining messages into a conversation history prompt in a single message with the role "user".
- When there are multiple figures in the given messages, we attach them to the user message in order. Note that if there are multiple figures, this strategy may confuse the model. For advanced solutions, developers are encouraged to implement their own prompt engineering strategies.
The following is an example:
prompt = model.format(
    Msg(
        "system",
        "You're a helpful assistant",
        role="system",
        url="figure1"
    ),
    Msg(
        "Bob",
        "How about this picture?",
        role="assistant",
        url="figure2"
    ),
    Msg(
        "user",
        "It's wonderful! How about mine?",
        role="user",
        image="figure3"
    )
)
The prompt will be as follows:
[ { "role": "system", "content": [ {"text": "You are a helpful assistant"}, {"image": "figure1"} ] }, { "role": "user", "content": [ {"image": "figure2"}, {"image": "figure3"}, { "text": ( "## Conversation History\n" "Bob: How about this picture?\n" "user: It's wonderful! How about mine?" ) }, ] } ]
Note
In the multimodal API, the urls of local files should be prefixed with "file://", which will be added by this format function.
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
The formatted messages.
- Return type:
List[dict]
- convert_url(url: str | Sequence[str] | None) List[dict] [source]
Convert the url to the format of DashScope API. Note for local files, a prefix “file://” will be added.
- Parameters:
url (Union[str, Sequence[str], None]) – A string url or a list of urls to be converted.
- Returns:
A list of dictionaries with key as the type of the url and value as the url. Only “image” and “audio” are supported.
- Return type:
List[dict]
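A small usage sketch of convert_url; the local path is a placeholder, and the expected output shape is an assumption based on the description above.

    # Assuming `model` is an initialized DashScopeMultiModalWrapper instance.
    converted = model.convert_url("/home/user/figure.png")
    # Expected shape (assumed): [{"image": "file:///home/user/figure.png"}]
    print(converted)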
- class agentscope.models.OllamaChatWrapper(config_name: str, model_name: str, stream: bool = False, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]
Bases:
OllamaWrapperBase
The model wrapper for Ollama chat API.
- Response:
Refer to
https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion
{
    "model": "registry.ollama.ai/library/llama3:latest",
    "created_at": "2023-12-12T14:13:43.416799Z",
    "message": {
        "role": "assistant",
        "content": "Hello! How are you today?"
    },
    "done": true,
    "total_duration": 5191566416,
    "load_duration": 2154458,
    "prompt_eval_count": 26,
    "prompt_eval_duration": 383809000,
    "eval_count": 298,
    "eval_duration": 4799921000
}
- model_type: str = 'ollama_chat'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- __init__(config_name: str, model_name: str, stream: bool = False, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any) None [source]
Initialize the model wrapper for Ollama API.
- Parameters:
model_name (str) – The model name used in ollama API.
stream (bool, default False) – Whether to enable stream mode.
options (dict, default None) – The extra keyword arguments used in Ollama api generation, e.g. {“temperature”: 0., “seed”: 123}.
keep_alive (str, default 5m) – Controls how long the model will stay loaded into memory following the request.
host (str, default None) – The host port of the ollama server. Defaults to None, which is 127.0.0.1:11434.
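These options map directly onto a model configuration; a minimal sketch follows, assuming configs are registered via agentscope.init(model_configs=...), with "llama3" and the host as placeholders.

    import agentscope

    ollama_config = {
        "config_name": "ollama_llama3",
        "model_type": "ollama_chat",      # matches OllamaChatWrapper.model_type
        "model_name": "llama3",
        "stream": False,
        "options": {"temperature": 0.7, "seed": 123},
        "keep_alive": "5m",
        "host": "127.0.0.1:11434",        # the default local Ollama server
    }
    agentscope.init(model_configs=[ollama_config])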
- format(*args: Msg | Sequence[Msg]) List[dict] [source]
Format the messages for ollama Chat API.
All messages will be formatted into a single system message with system prompt and conversation history.
Note: 1. This strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies. 2. For the ollama chat API, the content field shouldn't be an empty string.
Example:
prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)
The prompt will be as follows:
[ { "role": "user", "content": ( "You're a helpful assistant\n\n" "## Conversation History\n" "Bob: Hi, how can I help you?\n" "user: What's the date today?" ) } ]
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
The formatted messages.
- Return type:
List[dict]
- class agentscope.models.OllamaEmbeddingWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]
Bases:
OllamaWrapperBase
The model wrapper for Ollama embedding API.
- Response:
Refer to
https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings
{
    "model": "all-minilm",
    "embeddings": [[
        0.010071029, -0.0017594862, 0.05007221, 0.04692972,
        0.008599704, 0.105441414, -0.025878139, 0.12958129,
    ]]
}
- model_type: str = 'ollama_embedding'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- class agentscope.models.OllamaGenerationWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]
Bases:
OllamaWrapperBase
The model wrapper for Ollama generation API.
- Response:
From
https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion
{
    "model": "llama3",
    "created_at": "2023-08-04T19:22:45.499127Z",
    "response": "The sky is blue because it is the color of the sky.",
    "done": true,
    "context": [1, 2, 3],
    "total_duration": 5043500667,
    "load_duration": 5025959,
    "prompt_eval_count": 26,
    "prompt_eval_duration": 325953000,
    "eval_count": 290,
    "eval_duration": 4709213000
}
- model_type: str = 'ollama_generate'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- format(*args: Msg | Sequence[Msg]) str [source]
Format the input into a prompt string for the Ollama generation API.
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
The formatted string prompt.
- Return type:
str
- class agentscope.models.GeminiChatWrapper(config_name: str, model_name: str, api_key: str | None = None, stream: bool = False, **kwargs: Any)[source]
Bases:
GeminiWrapperBase
The wrapper for Google Gemini chat model, e.g. gemini-pro
- model_type: str = 'gemini_chat'
The type of the model, which is used in model configuration.
- generation_method = 'generateContent'
The generation method used in __call__ function.
- __init__(config_name: str, model_name: str, api_key: str | None = None, stream: bool = False, **kwargs: Any) None [source]
Initialize the wrapper for Google Gemini model.
- Parameters:
model_name (str) – The name of the model.
api_key (str, defaults to None) – The api_key for the model. If it is not provided, it will be loaded from environment variable.
stream (bool, defaults to False) – Whether to use stream mode.
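For reference, a hedged sketch of a configuration for this wrapper, assuming configs are registered via agentscope.init(model_configs=...); the API key is a placeholder.

    import agentscope

    gemini_config = {
        "config_name": "gemini_config",
        "model_type": "gemini_chat",      # matches GeminiChatWrapper.model_type
        "model_name": "gemini-pro",
        "api_key": "<GOOGLE_API_KEY>",    # otherwise loaded from the environment
        "stream": False,
    }
    agentscope.init(model_configs=[gemini_config])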
- static format(*args: Msg | Sequence[Msg]) List[dict] [source]
This function provides a basic prompting strategy for the Gemini Chat API in multi-party conversations: it combines all input into a single string and wraps it in a user message.
We make the above decision based on the following constraints of the Gemini generate API:
1. In Gemini generate_content API, the role field must be either user or model.
2. If we pass a list of messages to the generate_content API, the user role must speak at the beginning and end of the messages, and user and model must alternate. This prevents us from building multi-party conversations, where the model may keep speaking under different names.
The above information is current as of 2024/03/21. More information about the Gemini generate_content API can be found at https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini
Based on the above considerations, we decide to combine all messages into a single user message. This is a simple and straightforward strategy; if you have any better ideas, pull requests and discussions are welcome in our GitHub repository https://github.com/agentscope/agentscope!
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
A list with one user message.
- Return type:
List[dict]
- class agentscope.models.GeminiEmbeddingWrapper(config_name: str, model_name: str, api_key: str | None = None, **kwargs: Any)[source]
Bases:
GeminiWrapperBase
The wrapper for Google Gemini embedding model, e.g. models/embedding-001
- Response:
{
    "embeddings": [
        {
            object (ContentEmbedding)
        }
    ]
}
- model_type: str = 'gemini_embedding'
The type of the model, which is used in model configuration.
- class agentscope.models.ZhipuAIChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, stream: bool = False, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
ZhipuAIWrapperBase
The model wrapper for ZhipuAI’s chat API.
- Response:
{
    "created": 1703487403,
    "id": "8239375684858666781",
    "model": "glm-4",
    "request_id": "8239375684858666781",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Drawing blueprints with …",
                "role": "assistant"
            }
        }
    ],
    "usage": {
        "completion_tokens": 217,
        "prompt_tokens": 31,
        "total_tokens": 248
    }
}
- model_type: str = 'zhipuai_chat'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- __init__(config_name: str, model_name: str | None = None, api_key: str | None = None, stream: bool = False, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any) None [source]
Initialize the ZhipuAI client, for which the api_key is required. Other client args include base_url and timeout: the base_url is set to https://open.bigmodel.cn/api/paas/v4 if not specified, and the timeout arg sets the HTTP request timeout.
- Parameters:
config_name (str) – The name of the model config.
model_name (str, default None) – The name of the model to use in ZhipuAI API.
api_key (str, default None) – The API key for ZhipuAI API. If not specified, it will be read from the environment variable.
stream (bool, default False) – Whether to enable stream mode.
generate_args (dict, default None) – The extra keyword arguments used in zhipuai api generation, e.g. temperature, seed.
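A configuration sketch for this wrapper, under the assumption that configs are registered via agentscope.init(model_configs=...); the API key is a placeholder and "glm-4" is only an example model name.

    import agentscope

    zhipuai_config = {
        "config_name": "glm4_config",
        "model_type": "zhipuai_chat",     # matches ZhipuAIChatWrapper.model_type
        "model_name": "glm-4",
        "api_key": "<ZHIPUAI_API_KEY>",
        "stream": False,
        "generate_args": {"temperature": 0.5},
    }
    agentscope.init(model_configs=[zhipuai_config])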
- format(*args: Msg | Sequence[Msg]) List[dict] [source]
A common format strategy for chat models, which will format the input messages into a user message.
Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.
The following is an example:
prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)
The prompt will be as follows:
# prompt1
[
    {
        "role": "user",
        "content": (
            "You're a helpful assistant\n"
            "\n"
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
The formatted messages.
- Return type:
List[dict]
- class agentscope.models.ZhipuAIEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
ZhipuAIWrapperBase
The model wrapper for ZhipuAI embedding API.
Example Response:
{
    "model": "embedding-2",
    "data": [
        {
            "embedding": [
                (a total of 1024 elements)
                -0.02675454691052437,
                0.019060475751757622,
                ......
                -0.005519774276763201,
                0.014949671924114227
            ],
            "index": 0,
            "object": "embedding"
        }
    ],
    "object": "list",
    "usage": {
        "completion_tokens": 0,
        "prompt_tokens": 4,
        "total_tokens": 4
    }
}
- model_type: str = 'zhipuai_embedding'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- class agentscope.models.LiteLLMChatWrapper(config_name: str, model_name: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any)[source]
Bases:
LiteLLMWrapperBase
The model wrapper based on litellm chat API.
Note
- litellm requires users to set the API key in their environment.
- Different LLMs require different environment variables.
Example
- For OpenAI models, set "OPENAI_API_KEY".
- For models like "claude-2", set "ANTHROPIC_API_KEY".
- For Azure OpenAI models, set "AZURE_API_KEY", "AZURE_API_BASE" and "AZURE_API_VERSION".
Refer to the docs at https://docs.litellm.ai/docs/ .
- Response:
{
    'choices': [
        {
            'finish_reason': str,    # String: 'stop'
            'index': int,            # Integer: 0
            'message': {             # Dictionary [str, str]
                'role': str,         # String: 'assistant'
                'content': str       # String: "default message"
            }
        }
    ],
    'created': str,                  # String: None
    'model': str,                    # String: None
    'usage': {                       # Dictionary [str, int]
        'prompt_tokens': int,        # Integer
        'completion_tokens': int,    # Integer
        'total_tokens': int          # Integer
    }
}
- model_type: str = 'litellm_chat'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- __init__(config_name: str, model_name: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any) None [source]
To use the LiteLLM wrapper, environment variables must be set. Different model_name values may require different environment variables. For example:
- For model_name "gpt-3.5-turbo", you need to set "OPENAI_API_KEY":
os.environ["OPENAI_API_KEY"] = "your-api-key"
- For model_name "claude-2", you need to set "ANTHROPIC_API_KEY".
- For Azure OpenAI, you need to set "AZURE_API_KEY", "AZURE_API_BASE", and "AZURE_API_VERSION".
You should refer to the docs at https://docs.litellm.ai/docs/
- Parameters:
config_name (str) – The name of the model config.
model_name (str, default None) – The name of the model to use in OpenAI API.
stream (bool, default False) – Whether to enable stream mode.
generate_args (dict, default None) – The extra keyword arguments used in litellm api generation, e.g. temperature, seed. For generate_args, please refer to https://docs.litellm.ai/docs/completion/input for more details.
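Putting the environment variable and configuration together, a hedged sketch, assuming configs are registered via agentscope.init(model_configs=...) and with the key value as a placeholder:

    import os
    import agentscope

    # The provider key must be set before the model is called (see the note above).
    os.environ["OPENAI_API_KEY"] = "<your-api-key>"

    litellm_config = {
        "config_name": "litellm_gpt35",
        "model_type": "litellm_chat",     # matches LiteLLMChatWrapper.model_type
        "model_name": "gpt-3.5-turbo",
        "stream": False,
        "generate_args": {"temperature": 0.5},
    }
    agentscope.init(model_configs=[litellm_config])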
- format(*args: Msg | Sequence[Msg]) List[dict] [source]
A common format strategy for chat models, which will format the input messages into a user message.
Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.
The following is an example:
prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)
The prompt will be as follows:
# prompt1
[
    {
        "role": "user",
        "content": (
            "You're a helpful assistant\n"
            "\n"
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
The formatted messages.
- Return type:
List[dict]
- class agentscope.models.YiChatWrapper(config_name: str, model_name: str, api_key: str, max_tokens: int | None = None, top_p: float = 0.9, temperature: float = 0.3, stream: bool = False)[source]
Bases:
ModelWrapperBase
The model wrapper for Yi Chat API.
- Response:
{
    "id": "cmpl-ea89ae83",
    "object": "chat.completion",
    "created": 5785971,
    "model": "yi-large-rag",
    "usage": {
        "completion_tokens": 113,
        "prompt_tokens": 896,
        "total_tokens": 1009
    },
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Today in Los Angeles, the weather …"
            },
            "finish_reason": "stop"
        }
    ]
}
- model_type: str = 'yi_chat'
The type of the model wrapper, which is to identify the model wrapper class in model configuration.
- __init__(config_name: str, model_name: str, api_key: str, max_tokens: int | None = None, top_p: float = 0.9, temperature: float = 0.3, stream: bool = False) None [source]
Initialize the Yi chat model wrapper.
- Parameters:
config_name (str) – The name of the configuration to use.
model_name (str) – The name of the model to use, e.g. yi-large, yi-medium, etc.
api_key (str) – The API key for the Yi API.
max_tokens (Optional[int], defaults to None) – The maximum number of tokens to generate, defaults to None.
top_p (float, defaults to 0.9) – The randomness parameter in the range [0, 1].
temperature (float, defaults to 0.3) – The temperature parameter in the range [0, 2].
stream (bool, defaults to False) – Whether to stream the response or not.
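For reference, a hedged configuration sketch for this wrapper, assuming configs are registered via agentscope.init(model_configs=...); the API key is a placeholder and "yi-large" is only an example model name.

    import agentscope

    yi_config = {
        "config_name": "yi_config",
        "model_type": "yi_chat",          # matches YiChatWrapper.model_type
        "model_name": "yi-large",
        "api_key": "<YI_API_KEY>",
        "max_tokens": 1024,
        "top_p": 0.9,
        "temperature": 0.3,
        "stream": False,
    }
    agentscope.init(model_configs=[yi_config])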
- format(*args: Msg | Sequence[Msg]) List[dict] [source]
Format the messages into the required format of Yi Chat API.
Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.
The following is an example:
prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)
The prompt will be as follows:
# prompt1
[
    {
        "role": "user",
        "content": (
            "You're a helpful assistant\n"
            "\n"
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distribution, placeholder is also allowed.
- Returns:
The formatted messages.
- Return type:
List[dict]