agentscope.tokens module

The tokens interface for agentscope.

agentscope.tokens.count(model_name: str, messages: list[dict[str, str]]) → int [source]

Count the number of tokens for the given model and messages.

Parameters:
  • model_name (str) – The name of the model.

  • messages (list[dict[str, str]]) – A list of dictionaries.
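
Example (a minimal sketch; it assumes count() dispatches by model name to a registered counting function, and uses the role/content message format and the “gpt-4o” name documented for the OpenAI counter below):

    from agentscope.tokens import count

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ]
    # Dispatches to the token counting function registered for the model name.
    n_tokens = count("gpt-4o", messages)
    print(n_tokens)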

agentscope.tokens.count_openai_tokens(model_name: str, messages: list[dict[str, str]]) → int [source]

Count the number of tokens for the given OpenAI Chat model and messages.

Refer to https://platform.openai.com/docs/advanced-usage/managing-tokens

Parameters:
  • model_name (str) – The name of the OpenAI Chat model, e.g. “gpt-4o”.

  • messages (list[dict[str, str]]) – A list of dictionaries. Each dictionary should have the keys of “role” and “content”, and an optional key of “name”. For vision LLMs, the value of “content” should be a list of dictionaries.
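
Example (a minimal sketch of the documented message format, including the optional “name” key):

    from agentscope.tokens import count_openai_tokens

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "name": "alice", "content": "Summarize this paragraph for me."},
    ]
    n_tokens = count_openai_tokens("gpt-4o", messages)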

agentscope.tokens.count_gemini_tokens(model_name: str, messages: list[dict[str, str]]) → int [source]

Count the number of tokens for the given Gemini model and messages.

Parameters:
  • model_name (str) – The name of the Gemini model, e.g. “gemini-1.5-pro”.

  • messages (list[dict[str, str]])
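
Example (a minimal sketch; the exact message schema is not spelled out above, so role/content dictionaries are assumed here by analogy with the other counters):

    from agentscope.tokens import count_gemini_tokens

    # Role/content keys are an assumption, not documented for this counter.
    messages = [{"role": "user", "content": "Hello, Gemini!"}]
    n_tokens = count_gemini_tokens("gemini-1.5-pro", messages)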

agentscope.tokens.count_dashscope_tokens(model_name: str, messages: list[dict[str, str]], api_key: str | None = None) → int [source]

Count the number of tokens for the given Dashscope model and messages.

Note that this function calls the Dashscope API to count the tokens. Refer to https://help.aliyun.com/zh/dashscope/developer-reference/token-api?spm=5176.28197632.console-base_help.dexternal.1c407e06Y2bQVB&disableWebsiteRedirect=true for more details.

Parameters:
  • model_name (str) – The name of the Dashscope model, e.g. “qwen-max”.

  • messages (list[dict[str, str]]) – The list of messages; each message is a dict with the key “text”.

  • api_key (Optional[str], defaults to None) – The API key for Dashscope. If None, the API key will be read from the environment variable DASHSCOPE_API_KEY.

Returns:

The number of tokens.

Return type:

int
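
Example (a minimal sketch; because the counting is done via the Dashscope API, a valid key must be supplied explicitly or be available in the DASHSCOPE_API_KEY environment variable):

    import os

    from agentscope.tokens import count_dashscope_tokens

    # Per the parameter description above, each message is a dict with the key "text".
    messages = [{"text": "Hello, Qwen!"}]

    # api_key may be omitted; it is then read from the DASHSCOPE_API_KEY environment variable.
    n_tokens = count_dashscope_tokens(
        "qwen-max",
        messages,
        api_key=os.getenv("DASHSCOPE_API_KEY"),
    )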

agentscope.tokens.supported_models() → list[str] [source]

Get the list of supported models for token counting.
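
Example (a minimal sketch for checking whether a model has a registered counter before calling count()):

    from agentscope import tokens

    if "gpt-4o" in tokens.supported_models():
        n_tokens = tokens.count("gpt-4o", [{"role": "user", "content": "Hi"}])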

agentscope.tokens.register_model(model_name: str | list[str], tokens_count_func: Callable[[str, list[dict[str, str]]], int]) → None [source]

Register a token counting function for the model(s) with the given name(s). If a model name conflicts with an existing registration, the new function overrides the existing one.

Parameters:
  • model_name (Union[str, list[str]]) – The name of the model or a list of model names.

  • tokens_count_func (Callable[[str, list[dict[str, str]]], int]) – The token counting function for the model, which takes the model name and a list of message dictionaries as input and returns the number of tokens.
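
Example (a minimal sketch; the whitespace-based counter is a deliberately rough stand-in and the model name “my-local-model” is hypothetical):

    from agentscope import tokens

    def count_by_whitespace(model_name: str, messages: list[dict[str, str]]) -> int:
        # A rough approximation: whitespace-split the "content" field of each message.
        return sum(len(str(msg.get("content", "")).split()) for msg in messages)

    # Register the counter; an existing registration under the same name would be overridden.
    tokens.register_model("my-local-model", count_by_whitespace)

    # The registered counter can now be used through the generic count() entry point.
    n_tokens = tokens.count("my-local-model", [{"role": "user", "content": "Hello there"}])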

agentscope.tokens.count_huggingface_tokens(pretrained_model_name_or_path: str, messages: list[dict[str, str]], use_fast: bool = False, trust_remote_code: bool = False, enable_mirror: bool = False) → int [source]

Count the number of tokens for the given HuggingFace model and messages.

Parameters:
  • pretrained_model_name_or_path (str) – The model name or path used in AutoTokenizer.from_pretrained.

  • messages (list[dict[str, str]]) – The list of messages, each message is a dictionary with keys “role” and “content”.

  • use_fast (bool, defaults to False) – Whether to use the fast tokenizer when loading the tokenizer.

  • trust_remote_code (bool, defaults to False) – Whether to trust the remote code in transformers’ AutoTokenizer.from_pretrained API.

  • enable_mirror (bool, defaults to False) – Whether to enable the HuggingFace mirror, which is useful for users in China.

Returns:

The number of tokens.

Return type:

int
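
Example (a minimal sketch; the repository id is only illustrative — any name or local path accepted by AutoTokenizer.from_pretrained should work, provided the tokenizer files are available locally or can be downloaded):

    from agentscope.tokens import count_huggingface_tokens

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ]

    # "Qwen/Qwen2.5-7B-Instruct" is an illustrative model id, not a requirement.
    n_tokens = count_huggingface_tokens(
        "Qwen/Qwen2.5-7B-Instruct",
        messages,
        use_fast=True,
        trust_remote_code=False,
    )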