agentscope.models.yi_model module
Model wrapper for Yi models
- class agentscope.models.yi_model.YiChatWrapper(config_name: str, model_name: str, api_key: str, max_tokens: int | None = None, top_p: float = 0.9, temperature: float = 0.3, stream: bool = False)[source]
Bases:
ModelWrapperBase
The model wrapper for Yi Chat API.
- Response:

```json
{
    "id": "cmpl-ea89ae83",
    "object": "chat.completion",
    "created": 5785971,
    "model": "yi-large-rag",
    "usage": {
        "completion_tokens": 113,
        "prompt_tokens": 896,
        "total_tokens": 1009
    },
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Today in Los Angeles, the weather ..."
            },
            "finish_reason": "stop"
        }
    ]
}
```
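Given a parsed response of this shape, the assistant's reply and token usage can be read out with plain dict access (field names taken from the example above):

```python
# Example response payload in the shape returned by the Yi Chat API.
response = {
    "id": "cmpl-ea89ae83",
    "object": "chat.completion",
    "created": 5785971,
    "model": "yi-large-rag",
    "usage": {"completion_tokens": 113, "prompt_tokens": 896, "total_tokens": 1009},
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Today in Los Angeles, the weather ...",
            },
            "finish_reason": "stop",
        }
    ],
}

# Extract the generated text and the total token count.
text = response["choices"][0]["message"]["content"]
total_tokens = response["usage"]["total_tokens"]
```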
- model_type: str = 'yi_chat'
The type of the model wrapper, used to identify the model wrapper class in a model configuration.
- __init__(config_name: str, model_name: str, api_key: str, max_tokens: int | None = None, top_p: float = 0.9, temperature: float = 0.3, stream: bool = False) None [source]
Initialize the Yi chat model wrapper.
- Parameters:
config_name (str) – The name of the configuration to use.
model_name (str) – The name of the model to use, e.g. yi-large, yi-medium, etc.
api_key (str) – The API key for the Yi API.
max_tokens (Optional[int], defaults to None) – The maximum number of tokens to generate.
top_p (float, defaults to 0.9) – The nucleus sampling (top-p) parameter in the range [0, 1].
temperature (float, defaults to 0.3) – The temperature parameter in the range [0, 2].
stream (bool, defaults to False) – Whether to stream the response or not.
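These parameters map onto a model configuration entry. The following is a hypothetical configuration dict, assuming the usual AgentScope model-config format; the "config_name" value and the API-key placeholder are illustrative, not real values:

```python
# A model configuration entry for the Yi chat wrapper. The keys mirror the
# __init__ parameters above; "model_type" selects the YiChatWrapper class.
yi_config = {
    "config_name": "my_yi_config",    # hypothetical name for this config
    "model_type": "yi_chat",
    "model_name": "yi-large",
    "api_key": "YOUR_YI_API_KEY",     # placeholder; supply your own key
    "temperature": 0.3,
    "top_p": 0.9,
    "stream": False,
}
```

Such a dict would typically be registered via `agentscope.init(model_configs=[yi_config])` before agents reference it by its `config_name`.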
- format(*args: Msg | Sequence[Msg]) List[dict] [source]
Format the messages into the required format of Yi Chat API.
Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.
The following is an example:
```python
prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user"),
)
```
The prompt will be as follows:
```python
# prompt1
[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
```
- Parameters:
args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object or a list of Msg objects. In distributed mode, placeholder messages are also accepted.
- Returns:
The formatted messages.
- Return type:
List[dict]
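The merging strategy shown above can be sketched independently of AgentScope. The `Msg` dataclass and the `format_for_yi` helper below are simplified stand-ins written for illustration, assuming a message carries a name, content, and role:

```python
from dataclasses import dataclass


@dataclass
class Msg:
    """Minimal stand-in for agentscope's Msg: a named, role-tagged message."""
    name: str
    content: str
    role: str


def format_for_yi(*msgs: Msg) -> list[dict]:
    """Sketch of the documented strategy: keep a leading system message as-is,
    then merge all remaining messages into a single user message whose content
    is a "## Conversation History" transcript."""
    formatted: list[dict] = []
    rest = list(msgs)
    if rest and rest[0].role == "system":
        formatted.append({"role": "system", "content": rest[0].content})
        rest = rest[1:]
    if rest:
        history = "\n".join(f"{m.name}: {m.content}" for m in rest)
        formatted.append(
            {"role": "user", "content": "## Conversation History\n" + history}
        )
    return formatted
```

Applied to the three messages from the example above, this reproduces the two-entry prompt: one system message and one merged user message.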
- config_name: str
The name of the model configuration.
- model_name: str
The name of the model, which is used in model api calling.