agentscope.models.litellm_model module

Model wrapper based on litellm: https://docs.litellm.ai/docs/

class LiteLLMChatWrapper(config_name: str, model_name: str | None = None, stream: bool = False, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: LiteLLMWrapperBase

The model wrapper based on the litellm chat API.

Note

  • litellm requires users to set the API key in their environment

  • Different LLMs require different environment variables

Example

  • For OpenAI models, set “OPENAI_API_KEY”

  • For models like “claude-2”, set “ANTHROPIC_API_KEY”

  • For Azure OpenAI models, set “AZURE_API_KEY”, “AZURE_API_BASE”, and “AZURE_API_VERSION”

  • Refer to the docs at https://docs.litellm.ai/docs/ .
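The environment variables can be set in Python before the first model call; a minimal sketch (all key and endpoint values below are illustrative placeholders, not real credentials):

```python
import os

# Provider-specific key for OpenAI models (illustrative value).
os.environ["OPENAI_API_KEY"] = "sk-..."

# For Azure OpenAI models, all three variables are required (illustrative values).
os.environ["AZURE_API_KEY"] = "my-azure-key"
os.environ["AZURE_API_BASE"] = "https://my-endpoint.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2024-02-01"
```

Setting these in the shell (e.g. `export OPENAI_API_KEY=...`) before launching the process works equally well.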

Response:
{
    'choices': [
        {
            'finish_reason': str,  # String: 'stop'
            'index': int,  # Integer: 0
            'message': {  # Dictionary [str, str]
                'role': str,  # String: 'assistant'
                'content': str  # String: "default message"
            }
        }
    ],
    'created': str,  # String: None
    'model': str,  # String: None
    'usage': {  # Dictionary [str, int]
        'prompt_tokens': int,  # Integer
        'completion_tokens': int,  # Integer
        'total_tokens': int  # Integer
    }
}
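Since the response is a plain dict of the shape shown above, pulling out the reply text and token usage is straightforward. A small helper sketch (this function is hypothetical, not part of the wrapper):

```python
def extract_reply(response: dict) -> tuple:
    """Return (assistant text, total tokens) from a litellm-style
    chat response dict. Hypothetical helper for illustration."""
    content = response["choices"][0]["message"]["content"]
    total = response["usage"]["total_tokens"]
    return content, total

# A response dict matching the documented structure (token counts illustrative).
response = {
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {"role": "assistant", "content": "default message"},
        }
    ],
    "created": None,
    "model": None,
    "usage": {"prompt_tokens": 5, "completion_tokens": 3, "total_tokens": 8},
}

print(extract_reply(response))  # ('default message', 8)
```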
format(*args: Msg | Sequence[Msg]) List[dict][source]

A common format strategy for chat models, which will format the input messages into a user message.

Note that this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt1 = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

prompt2 = model.format(
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

# prompt1
[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

# prompt2
[
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:

args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

The formatted messages.

Return type:

List[dict]
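The formatting strategy documented above can be reproduced in a few lines. The sketch below uses a simplified stand-in for agentscope's Msg class and approximates, rather than reproduces, the wrapper's actual source:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Msg:
    """Simplified stand-in for agentscope's Msg (illustrative only)."""
    name: str
    content: str
    role: str


def format_messages(*msgs: Msg) -> List[dict]:
    """Keep system messages as-is and collapse all other messages
    into one user message with a conversation-history header,
    mirroring the documented strategy (a sketch, not the real code)."""
    prompt = []
    history = []
    for msg in msgs:
        if msg.role == "system":
            prompt.append({"role": "system", "content": msg.content})
        else:
            history.append(f"{msg.name}: {msg.content}")
    if history:
        prompt.append({
            "role": "user",
            "content": "## Conversation History\n" + "\n".join(history),
        })
    return prompt


prompt1 = format_messages(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user"),
)
```

Running this yields the same two-element list shown for `prompt1` above: the system message first, then a single user message containing the collapsed history.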

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used in model API calls.

model_type: str = 'litellm_chat'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.
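A model configuration selecting this wrapper via `model_type` might look like the dict below. The field values are illustrative; how the configuration is loaded by agentscope is not shown here:

```python
# Illustrative configuration dict; "litellm_chat" selects LiteLLMChatWrapper.
litellm_config = {
    "config_name": "my_litellm_chat",        # illustrative config name
    "model_type": "litellm_chat",            # matches the wrapper's model_type
    "model_name": "gpt-4o-mini",             # any model litellm supports (illustrative)
    "stream": False,                         # set True for streaming responses
    "generate_args": {"temperature": 0.7},   # forwarded to the API (illustrative)
}
```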

class LiteLLMWrapperBase(config_name: str, model_name: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ModelWrapperBase, ABC

The model wrapper based on LiteLLM API.

format(*args: Msg | Sequence[Msg]) List[dict] | str[source]

Format the input messages into the format that the model API requires.