agentscope.models.ollama_model module

Model wrapper for Ollama models.

class agentscope.models.ollama_model.OllamaWrapperBase(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]

Bases: ModelWrapperBase, ABC

The base class for Ollama model wrappers.

To use the Ollama API: 1. Install the Ollama server from https://ollama.com/download and start it. 2. Pull the model with ollama pull {model_name} in a terminal. After that, you can use the ollama API.
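A wrapper from this module is typically selected through a model configuration whose keys mirror the constructor arguments documented below. The following is a minimal sketch, assuming agentscope.init accepts a model_configs list and that a llama3 model has already been pulled; all values are illustrative:

import agentscope

# Register an Ollama chat model configuration (illustrative values).
agentscope.init(
    model_configs=[
        {
            "config_name": "my_ollama_chat",  # referenced by agents later
            "model_type": "ollama_chat",      # selects OllamaChatWrapper
            "model_name": "llama3",
            "options": {"temperature": 0.7, "seed": 123},
            "keep_alive": "5m",
        },
    ],
)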

model_type: str

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

model_name: str

The model name used in ollama API.

__init__(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any) → None[source]

Initialize the model wrapper for Ollama API.

Parameters:
  • model_name (str) – The model name used in ollama API.

  • options (dict, default None) – The extra keyword arguments used in Ollama API generation, e.g. {"temperature": 0., "seed": 123}.

  • keep_alive (str, default 5m) – Controls how long the model will stay loaded into memory following the request.

  • host (str, default None) – The host and port of the ollama server. Defaults to None, which resolves to 127.0.0.1:11434.

options: dict

A dict containing the options for the ollama generation API, e.g. {"temperature": 0, "seed": 123}

keep_alive: str

Controls how long the model will stay loaded into memory following the request.

class agentscope.models.ollama_model.OllamaChatWrapper(config_name: str, model_name: str, stream: bool = False, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]

Bases: OllamaWrapperBase

The model wrapper for Ollama chat API.

Response:
  • Refer to https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion

```json
{
  "model": "registry.ollama.ai/library/llama3:latest",
  "created_at": "2023-12-12T14:13:43.416799Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How are you today?"
  },
  "done": true,
  "total_duration": 5191566416,
  "load_duration": 2154458,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 383809000,
  "eval_count": 298,
  "eval_duration": 4799921000
}
```

model_type: str = 'ollama_chat'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

__init__(config_name: str, model_name: str, stream: bool = False, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any) → None[source]

Initialize the model wrapper for Ollama API.

Parameters:
  • model_name (str) – The model name used in ollama API.

  • stream (bool, default False) – Whether to enable stream mode.

  • options (dict, default None) – The extra keyword arguments used in Ollama API generation, e.g. {"temperature": 0., "seed": 123}.

  • keep_alive (str, default 5m) – Controls how long the model will stay loaded into memory following the request.

  • host (str, default None) – The host and port of the ollama server. Defaults to None, which resolves to 127.0.0.1:11434.
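The wrapper can also be constructed directly, as in the sketch below; all argument values are illustrative:

from agentscope.models.ollama_model import OllamaChatWrapper

# Illustrative values; any model pulled via `ollama pull` works here.
model = OllamaChatWrapper(
    config_name="my_ollama_chat",
    model_name="llama3",
    stream=False,
    options={"temperature": 0.7, "seed": 123},
    keep_alive="5m",
)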

format(*args: Msg | Sequence[Msg]) → List[dict][source]

Format the messages for ollama Chat API.

All messages are formatted into a system message (carrying the system prompt, if any) followed by a user message that carries the conversation history.

Note:
  1. This strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.
  2. For the ollama chat API, the content field shouldn't be an empty string.

Example:

prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

[
    {
        "role": "system",
        "content": "You're a helpful assistant"
    },
    {
        "role": "user",
        "content": (
            "## Conversation History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]
Parameters:

args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object or a list of Msg objects. In distributed mode, placeholder messages are also allowed.

Returns:

The formatted messages.

Return type:

List[dict]
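Once formatted, the prompt is passed to the wrapper itself. A minimal sketch, assuming the wrapper is callable (as AgentScope model wrappers are) and returns a ModelResponse whose text field carries the reply:

# `model` and `prompt` as constructed in the examples above.
response = model(prompt)
print(response.text)  # e.g. "Hello! How are you today?"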

model_name: str

The model name used in ollama API.

options: dict

A dict containing the options for the ollama generation API, e.g. {"temperature": 0, "seed": 123}

keep_alive: str

Controls how long the model will stay loaded into memory following the request.

config_name: str

The name of the model configuration.

class agentscope.models.ollama_model.OllamaEmbeddingWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]

Bases: OllamaWrapperBase

The model wrapper for Ollama embedding API.

Response:
  • Refer to https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings

```json
{
  "model": "all-minilm",
  "embeddings": [[
    0.010071029, -0.0017594862, 0.05007221, 0.04692972,
    0.008599704, 0.105441414, -0.025878139, 0.12958129
  ]]
}
```

model_type: str = 'ollama_embedding'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

model_name: str

The model name used in ollama API.

options: dict

A dict containing the options for the ollama generation API, e.g. {"temperature": 0, "seed": 123}

keep_alive: str

Controls how long the model will stay loaded into memory following the request.

config_name: str

The name of the model configuration.

format(*args: Msg | Sequence[Msg]) → List[dict] | str[source]

Format the input messages into the format that the model API requires.
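A minimal usage sketch. The call signature is an assumption here (a single text in, a ModelResponse with an embedding field out), not a documented contract:

from agentscope.models.ollama_model import OllamaEmbeddingWrapper

embedder = OllamaEmbeddingWrapper(
    config_name="my_ollama_embedding",
    model_name="all-minilm",  # illustrative embedding model
)
# Assumption: called with one text, returns a ModelResponse whose
# `embedding` field holds the vector(s).
response = embedder("Hello, world!")
print(response.embedding)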

class agentscope.models.ollama_model.OllamaGenerationWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]

Bases: OllamaWrapperBase

The model wrapper for Ollama generation API.

Response:
  • From https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 5043500667,
  "load_duration": 5025959,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 325953000,
  "eval_count": 290,
  "eval_duration": 4709213000
}
```

model_name: str

The model name used in ollama API.

options: dict

A dict containing the options for the ollama generation API, e.g. {"temperature": 0, "seed": 123}

keep_alive: str

Controls how long the model will stay loaded into memory following the request.

config_name: str

The name of the model configuration.

model_type: str = 'ollama_generate'

The type of the model wrapper, which is to identify the model wrapper class in model configuration.

format(*args: Msg | Sequence[Msg]) → str[source]

Format the input messages into a string prompt for the ollama generation API.

Parameters:

args (Union[Msg, Sequence[Msg]]) – The input arguments to be formatted, where each argument should be a Msg object or a list of Msg objects. In distributed mode, placeholder messages are also allowed.

Returns:

The formatted string prompt.

Return type:

str
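Putting the pieces together, a minimal sketch for the generation wrapper; the callable behaviour and ModelResponse fields are assumed to match the chat wrapper above, and all names are illustrative:

from agentscope.message import Msg
from agentscope.models.ollama_model import OllamaGenerationWrapper

model = OllamaGenerationWrapper(
    config_name="my_ollama_generate",
    model_name="llama3",
)
# Unlike the chat wrapper, format() returns a plain string prompt here.
prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("user", "Why is the sky blue?", role="user"),
)
response = model(prompt)
print(response.text)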