agentscope.embedding

The embedding module in agentscope.

class EmbeddingModelBase[source]

Bases: object

Base class for embedding models.

supported_modalities: list[str]

The supported data modalities, e.g. “text”, “image”, “video”.

__init__(model_name, dimensions)[source]

Initialize the embedding model base class.

Parameters:
  • model_name (str) – The name of the embedding model.

  • dimensions (int) – The dimension of the embedding vector.

Return type:

None

model_name: str

The embedding model name.

dimensions: int

The dimensions of the embedding vector.

async __call__(*args, **kwargs)[source]

Call the embedding API with the given arguments.

Parameters:
  • args (Any)

  • kwargs (Any)

Return type:

EmbeddingResponse
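To illustrate the subclassing pattern described above, here is a minimal, self-contained sketch. The base class below is a stand-in that mirrors the documented interface (`model_name`, `dimensions`, `supported_modalities`, async `__call__`); it is not imported from agentscope, and the `DummyTextEmbedding` subclass is purely hypothetical:

```python
import asyncio

# Stand-in mirroring the documented EmbeddingModelBase interface;
# not the actual agentscope class.
class EmbeddingModelBase:
    supported_modalities: list[str] = []

    def __init__(self, model_name: str, dimensions: int) -> None:
        self.model_name = model_name
        self.dimensions = dimensions

    async def __call__(self, *args, **kwargs):
        raise NotImplementedError

class DummyTextEmbedding(EmbeddingModelBase):
    """A toy model returning zero vectors, for illustration only."""

    supported_modalities: list[str] = ["text"]

    async def __call__(self, texts: list[str], **kwargs) -> list[list[float]]:
        # A real subclass would call an embedding API here and wrap the
        # result in an EmbeddingResponse; we return raw vectors instead.
        return [[0.0] * self.dimensions for _ in texts]

model = DummyTextEmbedding("dummy-embedding", dimensions=4)
vectors = asyncio.run(model(["hello", "world"]))
print(len(vectors), len(vectors[0]))  # 2 4
```

A real implementation would return an `EmbeddingResponse` rather than raw vectors; the toy model keeps the example self-contained.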

class EmbeddingUsage[source]

Bases: DictMixin

The usage of an embedding model API invocation.

time: float

The time used in seconds.

__init__(time, tokens=<factory>, type=<factory>)
Parameters:
  • time (float)

  • tokens (int | None)

  • type (Literal['embedding'])

Return type:

None

tokens: int | None

The number of tokens used, if available.

type: Literal['embedding']

The type of the usage, must be embedding.

class EmbeddingResponse[source]

Bases: DictMixin

The embedding response class.

embeddings: List[List[float]]

The embedding data.

id: str

The unique identifier of the embedding response.

created_at: str

The timestamp when the embedding response was created.

__init__(embeddings, id=<factory>, created_at=<factory>, type=<factory>, usage=<factory>, source=<factory>)
Parameters:
  • embeddings (List[List[float]])

  • id (str)

  • created_at (str)

  • type (Literal['embedding'])

  • usage (EmbeddingUsage | None)

  • source (Literal['cache', 'api'])

Return type:

None

type: Literal['embedding']

The type of the response, must be embedding.

usage: EmbeddingUsage | None

The usage of the embedding model API invocation, if available.

source: Literal['cache', 'api']

Whether the response comes from the cache or the API.
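A caller will typically branch on the `source` and `usage` fields, e.g. to log cache hits. The class below is a stand-in mirroring the documented fields of `EmbeddingResponse`, not the actual agentscope class:

```python
from dataclasses import dataclass
from typing import List, Optional

# Stand-in mirroring the documented EmbeddingResponse fields.
@dataclass
class EmbeddingResponse:
    embeddings: List[List[float]]
    source: str = "api"          # Literal['cache', 'api'] in the real class
    usage: Optional[object] = None

def describe(response: EmbeddingResponse) -> str:
    """Summarize where a response came from and how many vectors it holds."""
    n = len(response.embeddings)
    origin = "cache hit" if response.source == "cache" else "fresh API call"
    return f"{n} embedding(s) from {origin}"

resp = EmbeddingResponse(embeddings=[[0.1, 0.2]], source="cache")
print(describe(resp))  # 1 embedding(s) from cache hit
```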

class DashScopeTextEmbedding[source]

Bases: EmbeddingModelBase

DashScope text embedding API class.

Note

From the official documentation
(https://bailian.console.aliyun.com/?tab=api#/api/?type=model&url=2712515):

  • The max batch size that the DashScope text embedding API supports is 10 for the text-embedding-v4 and text-embedding-v3 models, and 25 for the text-embedding-v2 and text-embedding-v1 models.

  • The max token limit for a single input is 8192 tokens for the v4 and v3 models, and 2048 tokens for the v2 and v1 models.
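Given these batch-size limits, a caller embedding a long list of texts must chunk it before each call. A model-agnostic helper might look like the sketch below; the limits are taken from the note above, but the helper itself is not part of agentscope:

```python
# Max batch sizes per model, per the DashScope documentation cited above.
BATCH_LIMITS = {
    "text-embedding-v4": 10,
    "text-embedding-v3": 10,
    "text-embedding-v2": 25,
    "text-embedding-v1": 25,
}

def chunked(texts: list[str], model_name: str) -> list[list[str]]:
    """Split texts into batches no larger than the model's limit."""
    limit = BATCH_LIMITS.get(model_name, 10)  # conservative default
    return [texts[i:i + limit] for i in range(0, len(texts), limit)]

batches = chunked([f"doc {i}" for i in range(23)], "text-embedding-v4")
print([len(b) for b in batches])  # [10, 10, 3]
```

Each batch can then be passed to the model's `__call__` in turn and the resulting embeddings concatenated.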

supported_modalities: list[str] = ['text']

This class only supports text input.

__init__(api_key, model_name, dimensions=1024, embedding_cache=None)[source]

Initialize the DashScope text embedding model class.

Parameters:
  • api_key (str) – The dashscope API key.

  • model_name (str) – The name of the embedding model.

  • dimensions (int, defaults to 1024) – The dimension of the embedding vector, refer to the official documentation for more details.

  • embedding_cache (EmbeddingCacheBase | None, defaults to None) – The embedding cache class instance, used to cache the embedding results to avoid repeated API calls.

Return type:

None

async __call__(text, **kwargs)[source]

Call the DashScope embedding API.

Parameters:
  • text (List[str | TextBlock]) – The input text to be embedded, given as a list of strings and/or TextBlock objects.

  • kwargs (Any)

Return type:

EmbeddingResponse

class DashScopeMultiModalEmbedding[source]

Bases: EmbeddingModelBase

The DashScope multimodal embedding API, supporting text, image and video embedding.

supported_modalities: list[str] = ['text', 'image', 'video']

This class supports text, image and video input.

__init__(api_key, model_name, dimensions=None, embedding_cache=None)[source]

Initialize the DashScope multimodal embedding model class.

Parameters:
  • api_key (str) – The dashscope API key.

  • model_name (str) – The name of the embedding model, e.g. “multimodal-embedding-v1”, “tongyi-embedding-vision-plus”.

  • dimensions (int | None, defaults to None) – The dimension of the embedding vector; refer to the official documentation for more details.

  • embedding_cache (EmbeddingCacheBase | None, defaults to None) – The embedding cache class instance, used to cache the embedding results to avoid repeated API calls.

Return type:

None

async __call__(inputs, **kwargs)[source]

Call the DashScope multimodal embedding API, which accepts text, image, and video data.

Parameters:
  • inputs (list[TextBlock | ImageBlock | VideoBlock]) – The input data to be embedded. It can be a list of text, image, and video blocks.

  • kwargs (Any)

Returns:

The embedding response object, which contains the embeddings and usage information.

Return type:

EmbeddingResponse
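Since models advertise what they accept via `supported_modalities`, a caller can validate inputs before invoking a multimodal model. The block dicts below are illustrative only; the actual TextBlock/ImageBlock/VideoBlock shapes are defined in agentscope:

```python
# Sketch of validating inputs against a model's supported_modalities.
# The block dicts are illustrative; real block types live in agentscope.
def check_modalities(inputs: list[dict], supported: list[str]) -> list[str]:
    """Return the modalities in `inputs` that the model does not support."""
    return [blk["type"] for blk in inputs if blk["type"] not in supported]

inputs = [
    {"type": "text", "text": "a cat"},
    {"type": "image", "url": "cat.png"},
    {"type": "audio", "url": "meow.wav"},
]
unsupported = check_modalities(inputs, ["text", "image", "video"])
print(unsupported)  # ['audio']
```

Rejecting unsupported blocks up front avoids a failed API round trip.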

class OpenAITextEmbedding[source]

Bases: EmbeddingModelBase

OpenAI text embedding model class.

supported_modalities: list[str] = ['text']

This class only supports text input.

__init__(api_key, model_name, dimensions=1024, embedding_cache=None, **kwargs)[source]

Initialize the OpenAI text embedding model class.

Parameters:
  • api_key (str) – The OpenAI API key.

  • model_name (str) – The name of the embedding model.

  • dimensions (int, defaults to 1024) – The dimension of the embedding vector.

  • embedding_cache (EmbeddingCacheBase | None, defaults to None) – The embedding cache class instance, used to cache the embedding results to avoid repeated API calls.

  • kwargs (Any)

Return type:

None

async __call__(text, **kwargs)[source]

Call the OpenAI embedding API.

Parameters:
  • text (List[str | TextBlock]) – The input text to be embedded, given as a list of strings and/or TextBlock objects.

  • kwargs (Any)

Return type:

EmbeddingResponse

class GeminiTextEmbedding[source]

Bases: EmbeddingModelBase

The Gemini text embedding model.

supported_modalities: list[str] = ['text']

This class only supports text input.

__init__(api_key, model_name, dimensions=3072, embedding_cache=None, **kwargs)[source]

Initialize the Gemini text embedding model class.

Parameters:
  • api_key (str) – The Gemini API key.

  • model_name (str) – The name of the embedding model.

  • dimensions (int, defaults to 3072) –

    The dimension of the embedding vector, refer to the official documentation for more details.

  • embedding_cache (EmbeddingCacheBase | None, defaults to None) – The embedding cache class instance, used to cache the embedding results to avoid repeated API calls.

  • kwargs (Any)

Return type:

None

async __call__(text, **kwargs)[source]

The Gemini embedding API call.

Parameters:
  • text (List[str | TextBlock]) – The input text to be embedded, given as a list of strings and/or TextBlock objects.

  • kwargs (Any)

Return type:

EmbeddingResponse

class OllamaTextEmbedding[source]

Bases: EmbeddingModelBase

The Ollama embedding model.

supported_modalities: list[str] = ['text']

This class only supports text input.

__init__(model_name, dimensions, host=None, embedding_cache=None, **kwargs)[source]

Initialize the Ollama text embedding model class.

Parameters:
  • model_name (str) – The name of the embedding model.

  • dimensions (int) – The dimension of the embedding vector, which should be set according to the model used.

  • host (str | None, defaults to None) – The host URL for the Ollama API.

  • embedding_cache (EmbeddingCacheBase | None, defaults to None) – The embedding cache class instance, used to cache the embedding results to avoid repeated API calls.

  • kwargs (Any)

Return type:

None

async __call__(text, **kwargs)[source]

Call the Ollama embedding API.

Parameters:
  • text (List[str | TextBlock]) – The input text to be embedded, given as a list of strings and/or TextBlock objects.

  • kwargs (Any)

Return type:

EmbeddingResponse

class EmbeddingCacheBase[source]

Bases: object

Base class for embedding caches, which is responsible for storing and retrieving embeddings.

abstract async store(embeddings, identifier, overwrite=False, **kwargs)[source]

Store the embeddings with the given identifier.

Parameters:
  • embeddings (List[Embedding]) – The embeddings to store.

  • identifier (JSONSerializableObject) – The identifier to distinguish the embeddings.

  • overwrite (bool, defaults to False) – Whether to overwrite existing embeddings with the same identifier. If True, existing embeddings will be replaced.

  • kwargs (Any)

Return type:

None

abstract async retrieve(identifier)[source]

Retrieve the embeddings with the given identifier. If not found, return None.

Parameters:

identifier (JSONSerializableObject) – The identifier to retrieve the embeddings.

Return type:

List[List[float]] | None

abstract async remove(identifier)[source]

Remove the embeddings with the given identifier.

Parameters:

identifier (JSONSerializableObject) – The identifier to remove the embeddings.

Return type:

None

abstract async clear()[source]

Clear all cached embeddings.

Return type:

None
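To make the abstract interface concrete, here is a minimal in-memory implementation of the four documented methods. It is a self-contained stand-in, not derived from agentscope's actual `EmbeddingCacheBase`, and the JSON-based keying is an assumption for illustration:

```python
import asyncio
import json

# Minimal in-memory cache implementing the interface documented above.
class InMemoryEmbeddingCache:
    def __init__(self) -> None:
        self._data: dict[str, list[list[float]]] = {}

    @staticmethod
    def _key(identifier) -> str:
        # JSON-serialize the identifier so lists/dicts become usable keys.
        return json.dumps(identifier, sort_keys=True)

    async def store(self, embeddings, identifier, overwrite=False, **kwargs) -> None:
        key = self._key(identifier)
        if overwrite or key not in self._data:
            self._data[key] = embeddings

    async def retrieve(self, identifier):
        # Returns None when the identifier is not found, as documented.
        return self._data.get(self._key(identifier))

    async def remove(self, identifier) -> None:
        self._data.pop(self._key(identifier), None)

    async def clear(self) -> None:
        self._data.clear()

async def demo():
    cache = InMemoryEmbeddingCache()
    await cache.store([[0.1, 0.2]], identifier={"model": "m", "text": "hi"})
    # sort_keys makes the lookup insensitive to dict key order.
    return await cache.retrieve({"text": "hi", "model": "m"})

print(asyncio.run(demo()))  # [[0.1, 0.2]]
```

The same store/retrieve/remove/clear contract is what `FileEmbeddingCache` below implements against the filesystem.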

class FileEmbeddingCache[source]

Bases: EmbeddingCacheBase

The embedding cache class that stores each embedding vector in a binary file.

__init__(cache_dir='./.cache/embeddings', max_file_number=None, max_cache_size=None)[source]

Initialize the file embedding cache class.

Parameters:
  • cache_dir (str, defaults to “./.cache/embeddings”) – The directory to store the embedding files.

  • max_file_number (int | None, defaults to None) – The maximum number of files to keep in the cache directory. If exceeded, the oldest files will be removed.

  • max_cache_size (int | None, defaults to None) – The maximum size of the cache directory in MB. If exceeded, the oldest files will be removed until the size is within the limit.

Return type:

None

property cache_dir: str

The cache directory where the embedding files are stored.

async store(embeddings, identifier, overwrite=False, **kwargs)[source]

Store the embeddings with the given identifier.

Parameters:
  • embeddings (List[Embedding]) – The embeddings to store.

  • identifier (JSONSerializableObject) – The identifier to distinguish the embeddings, which will be used to generate a hashable filename, so it should be JSON serializable (e.g. a string, number, list, dict).

  • overwrite (bool, defaults to False) – Whether to overwrite existing embeddings with the same identifier. If True, existing embeddings will be replaced.

  • kwargs (Any)

Return type:

None

async retrieve(identifier)[source]

Retrieve the embeddings with the given identifier. If not found, return None.

Parameters:

identifier (JSONSerializableObject) – The identifier to retrieve the embeddings, which will be used to generate a hashable filename, so it should be JSON serializable (e.g. a string, number, list, dict).

Return type:

List[List[float]] | None

async remove(identifier)[source]

Remove the embeddings with the given identifier.

Parameters:

identifier (JSONSerializableObject) – The identifier to remove the embeddings, which will be used to generate a hashable filename, so it should be JSON serializable (e.g. a string, number, list, dict).

Return type:

None

async clear()[source]

Clear the cache directory by removing all files.

Return type:

None
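The docs above note that identifiers are turned into hashable filenames, which is why they must be JSON serializable. One plausible scheme (an assumption for illustration, not agentscope's actual implementation) hashes the canonical JSON form of the identifier:

```python
import hashlib
import json

# Hypothetical identifier-to-filename scheme; agentscope's actual
# implementation may differ.
def identifier_to_filename(identifier) -> str:
    canonical = json.dumps(identifier, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest() + ".bin"

a = identifier_to_filename({"model": "text-embedding-v4", "text": "hello"})
b = identifier_to_filename({"text": "hello", "model": "text-embedding-v4"})
print(a == b)  # True: equal identifiers map to the same file
```

Canonicalizing with `sort_keys=True` ensures that dicts with the same contents but different key order resolve to the same cache file.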