agentscope.models.litellm_model module

Model wrapper based on litellm: https://docs.litellm.ai/docs/

class agentscope.models.litellm_model.LiteLLMWrapperBase(config_name: str, model_name: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ModelWrapperBase, ABC

The model wrapper based on the LiteLLM API.

__init__(config_name: str, model_name: str | None = None, generate_args: dict | None = None, **kwargs: Any) None[source]

To use the LiteLLM wrapper, environment variables must be set. Different model names may require different environment variables. For example:

  • for model_name: “gpt-3.5-turbo”, you need to set “OPENAI_API_KEY”:

    ` os.environ["OPENAI_API_KEY"] = "your-api-key" `

  • for model_name: “claude-2”, you need to set “ANTHROPIC_API_KEY”

  • for Azure OpenAI, you need to set “AZURE_API_KEY”, “AZURE_API_BASE” and “AZURE_API_VERSION”

Refer to the docs at https://docs.litellm.ai/docs/ for more details.

Parameters:
  • config_name (str) – The name of the model config.

  • model_name (str, default None) – The name of the model to use in the LiteLLM API.

  • generate_args (dict, default None) – The extra keyword arguments used in litellm API generation, e.g. temperature, seed; see https://docs.litellm.ai/docs/completion/input for the full list. A configuration sketch follows this list.
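As a minimal sketch of how these parameters are typically supplied (the config_name and generate_args values here are illustrative; registration via agentscope.init is the usual route, with the "litellm_chat" model_type documented below):

```python
import os

import agentscope

# litellm reads the API key from the environment.
os.environ["OPENAI_API_KEY"] = "your-api-key"

# Register a litellm chat model config; the keys mirror the
# parameters documented above.
agentscope.init(
    model_configs=[
        {
            "config_name": "my_litellm_config",
            "model_type": "litellm_chat",
            "model_name": "gpt-3.5-turbo",
            "generate_args": {"temperature": 0.7, "seed": 42},
        },
    ],
)
```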

format(*args: Msg | Sequence[Msg]) List[dict] | str[source]

Format the input messages into the format required by the model API.

class agentscope.models.litellm_model.LiteLLMChatWrapper(config_name: str, model_name: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: LiteLLMWrapperBase

The model wrapper based on the litellm chat API.

Note

  • litellm requires users to set their API keys in the environment

  • Different LLMs require different environment variables

Example

  • For OpenAI models, set “OPENAI_API_KEY”

  • For models like “claude-2”, set “ANTHROPIC_API_KEY”

  • For Azure OpenAI models, set “AZURE_API_KEY”, “AZURE_API_BASE” and “AZURE_API_VERSION” (see the sketch below)

  • Refer to the docs at https://docs.litellm.ai/docs/
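For instance, the Azure OpenAI variables can be set from Python before the wrapper is used; the values below are placeholders to replace with your own credentials:

```python
import os

# Placeholder values for the Azure OpenAI environment variables
# named in the note above.
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "https://your-resource.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2023-05-15"  # an example API version
```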

Response:

```json
{
    'choices': [
        {
            'finish_reason': str,   # String: 'stop'
            'index': int,           # Integer: 0
            'message': {            # Dictionary [str, str]
                'role': str,        # String: 'assistant'
                'content': str      # String: 'default message'
            }
        }
    ],
    'created': str,                 # String: None
    'model': str,                   # String: None
    'usage': {                      # Dictionary [str, int]
        'prompt_tokens': int,       # Integer
        'completion_tokens': int,   # Integer
        'total_tokens': int         # Integer
    }
}
```
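For reference, the same fields can be read off a raw litellm call; a minimal sketch, where the model name and prompt are illustrative and OPENAI_API_KEY must already be set:

```python
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)

# These fields mirror the response structure shown above.
print(response.choices[0].finish_reason)    # e.g. "stop"
print(response.choices[0].message.content)  # the generated text
print(response.usage.total_tokens)          # prompt + completion tokens
```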

model_type: str = 'litellm_chat'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

__init__(config_name: str, model_name: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any) None[source]

To use the LiteLLM wrapper, environment variables must be set. Different model names may require different environment variables. For example:

  • for model_name: “gpt-3.5-turbo”, you need to set “OPENAI_API_KEY”:

    ` os.environ["OPENAI_API_KEY"] = "your-api-key" `

  • for model_name: “claude-2”, you need to set “ANTHROPIC_API_KEY”

  • for Azure OpenAI, you need to set “AZURE_API_KEY”, “AZURE_API_BASE” and “AZURE_API_VERSION”

Refer to the docs at https://docs.litellm.ai/docs/ for more details.

Parameters:
  • config_name (str) – The name of the model config.

  • model_name (str, default None) – The name of the model to use in the LiteLLM API.

  • stream (bool, default False) – Whether to enable stream mode.

  • generate_args (dict, default None) – The extra keyword arguments used in litellm API generation, e.g. temperature, seed; see https://docs.litellm.ai/docs/completion/input for the full list. A construction sketch follows this list.
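A hedged sketch of constructing the wrapper directly (the config-based route via agentscope.init also works), assuming the AgentScope convention that model wrappers are callable and return a response object with a text field; the config_name and generate_args values are illustrative:

```python
import os

from agentscope.models.litellm_model import LiteLLMChatWrapper

# Required by litellm for OpenAI models.
os.environ["OPENAI_API_KEY"] = "your-api-key"

model = LiteLLMChatWrapper(
    config_name="my_litellm_config",
    model_name="gpt-3.5-turbo",
    stream=False,
    generate_args={"temperature": 0.5},
)

# Call the wrapper with messages already in the API format
# (see format() below for building them from Msg objects).
response = model([{"role": "user", "content": "What's the date today?"}])
print(response.text)
```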

format(*args: Msg | Sequence[Msg]) List[dict][source]

A common format strategy for chat models: any system message is kept as a separate system message, and the remaining messages are merged into a single user message containing the conversation history.

Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

# prompt1
[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:

args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object or a list of Msg objects. In distributed mode, placeholder messages are also allowed.

Returns:

The formatted messages.

Return type:

List[dict]
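A usage sketch following on from the example above, assuming model is a LiteLLMChatWrapper instance as constructed earlier: the formatted prompt can be passed straight back into the wrapper.

```python
from agentscope.message import Msg

# Build the prompt with format(), then call the model with it.
prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("user", "What's the date today?", role="user"),
)
response = model(prompt)
print(response.text)
```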

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used when calling the model API.