agentscope.tokens module
The tokens interface for agentscope.
- agentscope.tokens.count(model_name: str, messages: list[dict[str, str]]) int [source]
Count the number of tokens for the given model and messages.
- Parameters:
model_name (str) – The name of the model.
messages (list[dict[str, str]]) – The list of messages; the expected dictionary keys depend on the target model (see the model-specific functions below).
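A minimal usage sketch. The message format below follows the OpenAI-style entries documented on this page; the actual call is shown commented out since it assumes agentscope is installed:

```python
# Each message is a dict; for OpenAI-style models the keys are
# "role" and "content" (Dashscope expects a "text" key instead).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather like today?"},
]

# Assuming agentscope is installed:
# from agentscope import tokens
# num = tokens.count("gpt-4o", messages)
```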
- agentscope.tokens.count_openai_tokens(model_name: str, messages: list[dict[str, str]]) int [source]
Count the number of tokens for the given OpenAI Chat model and messages.
Refer to https://platform.openai.com/docs/advanced-usage/managing-tokens
- Parameters:
model_name (str) – The name of the OpenAI Chat model, e.g. “gpt-4o”.
messages (list[dict[str, str]]) – A list of dictionaries. Each dictionary should have the keys of “role” and “content”, and an optional key of “name”. For vision LLMs, the value of “content” should be a list of dictionaries.
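The guide linked above counts a fixed per-message overhead plus the encoded length of each field. The sketch below reproduces that arithmetic with a stand-in whitespace tokenizer; the real implementation encodes each value with the model's tiktoken encoding, so absolute numbers will differ:

```python
def sketch_count_openai_tokens(messages):
    """Per-message overhead scheme from OpenAI's token-management guide.

    Stand-in tokenizer: whitespace split. The real count replaces
    len(value.split()) with len(encoding.encode(value)) from tiktoken.
    """
    tokens_per_message = 3  # each message is wrapped in control tokens
    tokens_per_name = 1     # the optional "name" field costs one extra
    total = 0
    for message in messages:
        total += tokens_per_message
        for key, value in message.items():
            total += len(value.split())  # stand-in for the real encoder
            if key == "name":
                total += tokens_per_name
    total += 3  # every reply is primed with <|start|>assistant<|message|>
    return total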
- agentscope.tokens.count_gemini_tokens(model_name: str, messages: list[dict[str, str]]) int [source]
Count the number of tokens for the given Gemini model and messages.
- Parameters:
model_name (str) – The name of the Gemini model, e.g. “gemini-1.5-pro”.
messages (list[dict[str, str]]) – The list of messages to count tokens for.
- agentscope.tokens.count_dashscope_tokens(model_name: str, messages: list[dict[str, str]], api_key: str | None = None) int [source]
Count the number of tokens for the given Dashscope model and messages.
Note that this function calls the Dashscope API to count the tokens. Refer to https://help.aliyun.com/zh/dashscope/developer-reference/token-api?spm=5176.28197632.console-base_help.dexternal.1c407e06Y2bQVB&disableWebsiteRedirect=true for more details.
- Parameters:
model_name (str) – The name of the Dashscope model, e.g. “qwen-max”.
messages (list[dict[str, str]]) – The list of messages, each message is a dict with the key ‘text’.
api_key (Optional[str], defaults to None) – The API key for Dashscope. If None, the API key will be read from the environment variable DASHSCOPE_API_KEY.
- Returns:
The number of tokens.
- Return type:
int
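A sketch of the api_key fallback described above. The environment-variable name is taken from this page; the Dashscope API call itself is omitted:

```python
import os

def resolve_dashscope_api_key(api_key=None):
    # If api_key is None, fall back to the DASHSCOPE_API_KEY environment
    # variable, mirroring the behaviour documented for
    # count_dashscope_tokens.
    if api_key is not None:
        return api_key
    return os.environ.get("DASHSCOPE_API_KEY")
```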
- agentscope.tokens.supported_models() list[str] [source]
Get the list of supported models for token counting.
- agentscope.tokens.register_model(model_name: str | list[str], tokens_count_func: Callable[[str, list[dict[str, str]]], int]) None [source]
Register a tokens counting function for the model(s) with the given name(s). If a model name conflicts with an existing registration, the new function overrides the existing one.
- Parameters:
model_name (Union[str, list[str]]) – The name of the model or a list of model names.
tokens_count_func (Callable[[str, list[dict[str, str]]], int]) – The tokens counting function for the model, which takes the model name and a list of dictionary messages as input and returns the number of tokens.
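A sketch of a custom counting function matching the Callable[[str, list[dict[str, str]]], int] signature above. The whitespace heuristic is purely illustrative; the registration call assumes agentscope is installed:

```python
def count_whitespace_tokens(model_name: str, messages: list[dict[str, str]]) -> int:
    # Naive heuristic: one token per whitespace-separated word in "content".
    return sum(len(message["content"].split()) for message in messages)

# Registration (assuming agentscope is installed):
# from agentscope import tokens
# tokens.register_model("my-local-model", count_whitespace_tokens)
# tokens.count("my-local-model", [{"role": "user", "content": "Hello there"}])
```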
- agentscope.tokens.count_huggingface_tokens(pretrained_model_name_or_path: str, messages: list[dict[str, str]], use_fast: bool = False, trust_remote_code: bool = False, enable_mirror: bool = False) int [source]
Count the number of tokens for the given HuggingFace model and messages.
- Parameters:
pretrained_model_name_or_path (str) – The model name or path used in AutoTokenizer.from_pretrained.
messages (list[dict[str, str]]) – The list of messages, each message is a dictionary with keys “role” and “content”.
use_fast (bool, defaults to False) – Whether to use the fast tokenizer when loading the tokenizer.
trust_remote_code (bool, defaults to False) – Whether to trust the remote code in transformers’ AutoTokenizer.from_pretrained API.
enable_mirror (bool, defaults to False) – Whether to enable the HuggingFace mirror, which is useful for users in China.
- Returns:
The number of tokens.
- Return type:
int
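A sketch of what enable_mirror plausibly does. Pointing the HuggingFace Hub client at a mirror is conventionally done via the HF_ENDPOINT environment variable; both this mechanism and the mirror URL below are assumptions, not confirmed details of agentscope's implementation:

```python
import os

def hf_endpoint(enable_mirror: bool = False):
    # Assumed behaviour of enable_mirror: redirect HuggingFace Hub
    # downloads to a mirror via HF_ENDPOINT. The mirror URL here is
    # an assumption for illustration.
    if enable_mirror:
        os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
    return os.environ.get("HF_ENDPOINT")
```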