agentscope.models.gemini_model module

Google Gemini model wrapper.

class agentscope.models.gemini_model.GeminiWrapperBase(config_name: str, model_name: str, api_key: str | None = None, **kwargs: Any)[source]

Bases: ModelWrapperBase, ABC

The base class for Google Gemini model wrappers.

__init__(config_name: str, model_name: str, api_key: str | None = None, **kwargs: Any) → None[source]

Initialize the wrapper for the Google Gemini model.

Parameters:
  • model_name (str) – The name of the model.

  • api_key (str, defaults to None) – The api_key for the model. If it is not provided, it will be loaded from an environment variable.

list_models() → Sequence[source]

List all models available to this API.
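
A minimal usage sketch (hedged): since GeminiWrapperBase is abstract, the concrete subclass GeminiChatWrapper is used below, and the API key is a placeholder.

```python
from agentscope.models.gemini_model import GeminiChatWrapper

# Hedged sketch: list the models reachable with this API key.
# The api_key value is a placeholder; it may also be supplied via
# an environment variable, as noted above.
model = GeminiChatWrapper(
    config_name="my_gemini",
    model_name="gemini-pro",
    api_key="your-api-key",
)
for m in model.list_models():
    print(m)
```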

class agentscope.models.gemini_model.GeminiChatWrapper(config_name: str, model_name: str, api_key: str | None = None, stream: bool = False, **kwargs: Any)[source]

Bases: GeminiWrapperBase

The wrapper for the Google Gemini chat model, e.g. gemini-pro

model_type: str = 'gemini_chat'

The type of the model, which is used in model configuration.

generation_method = 'generateContent'

The generation method used in the __call__ function.
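
For reference, a hedged example of a model configuration entry that selects this wrapper via its model_type (field names follow AgentScope's config convention; all values are placeholders):

```python
import agentscope

# Hypothetical configuration entry; "model_type" must equal
# GeminiChatWrapper.model_type ("gemini_chat") so AgentScope routes
# this config to the chat wrapper. All values are placeholders.
gemini_chat_config = {
    "config_name": "my_gemini_chat",
    "model_type": "gemini_chat",
    "model_name": "gemini-pro",
    "api_key": "your-api-key",  # optional if set in the environment
}

agentscope.init(model_configs=[gemini_chat_config])
```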

__init__(config_name: str, model_name: str, api_key: str | None = None, stream: bool = False, **kwargs: Any) → None[source]

Initialize the wrapper for the Google Gemini model.

Parameters:
  • model_name (str) – The name of the model.

  • api_key (str, defaults to None) – The api_key for the model. If it is not provided, it will be loaded from an environment variable.

  • stream (bool, defaults to False) – Whether to use stream mode.
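
A minimal instantiation-and-call sketch for the parameters above (hedged: it assumes the response object exposes a text field, per AgentScope's ModelResponse, and that __call__ accepts the output of format()):

```python
from agentscope.message import Msg
from agentscope.models.gemini_model import GeminiChatWrapper

# Hedged sketch; the api_key is a placeholder.
model = GeminiChatWrapper(
    config_name="my_gemini_chat",
    model_name="gemini-pro",
    api_key="your-api-key",
    stream=False,
)
# format() builds the prompt, __call__ sends it to the API.
prompt = model.format(
    Msg(name="system", content="You are a helpful assistant.", role="system"),
    Msg(name="user", content="What is AgentScope?", role="user"),
)
response = model(prompt)
print(response.text)  # the generated reply
```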

static format(*args: Msg | Sequence[Msg]) → List[dict][source]

This function provides a basic prompting strategy for the Gemini Chat API in multi-party conversations: it combines all input messages into a single string and wraps it in one user message.

We adopt this strategy because of the following constraints of the Gemini generate_content API:

1. In the Gemini generate_content API, the role field must be either user or model.

2. If we pass a list of messages to the generate_content API, a user message must appear at both the beginning and the end of the list, and user and model turns must alternate. This prevents us from building multi-party conversations, in which the model may keep speaking under different names.

The above information is accurate as of 2024/03/21. More information about the Gemini generate_content API can be found at https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini

Based on these considerations, we combine all messages into a single user message. This is a simple and straightforward strategy; if you have a better idea, pull requests and discussions are welcome in our GitHub repository: https://github.com/agentscope/agentscope!

Parameters:

args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

A list with one user message.

Return type:

List[dict]
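
An illustrative sketch of the combining behavior (the exact merged-string layout and the parts key are assumptions; only the single-user-message shape is guaranteed by the description above):

```python
from agentscope.message import Msg
from agentscope.models.gemini_model import GeminiChatWrapper

# format() is a static method, so no instance or API key is needed.
prompt = GeminiChatWrapper.format(
    Msg(name="Alice", content="Hi!", role="user"),
    Msg(name="Bob", content="Nice to meet you!", role="assistant"),
)
print(prompt)
# Expected shape (hedged): a single-element list such as
# [{"role": "user", "parts": ["...all messages merged into one string..."]}]
```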

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used in model API calls.

class agentscope.models.gemini_model.GeminiEmbeddingWrapper(config_name: str, model_name: str, api_key: str | None = None, **kwargs: Any)[source]

Bases: GeminiWrapperBase

The wrapper for the Google Gemini embedding model, e.g. models/embedding-001

Response:

```json
{
  "embeddings": [
    {
      object (ContentEmbedding)
    }
  ]
}
```

config_name: str

The name of the model configuration.

model_name: str

The name of the model, which is used in model API calls.

model_type: str = 'gemini_embedding'

The type of the model, which is used in model configuration.
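
A minimal usage sketch (hedged: it assumes the embedding vectors are exposed on AgentScope's ModelResponse as an embedding field; all values are placeholders):

```python
from agentscope.models.gemini_model import GeminiEmbeddingWrapper

# Hedged sketch of embedding a piece of text.
model = GeminiEmbeddingWrapper(
    config_name="my_gemini_embedding",
    model_name="models/embedding-001",
    api_key="your-api-key",  # placeholder
)
response = model("AgentScope is a multi-agent platform.")
print(response.embedding)  # the ContentEmbedding vector(s) from the response above
```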