agentscope.models.ollama_model
Model wrapper for Ollama models.
- class OllamaChatWrapper(config_name: str, model_name: str, stream: bool = False, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]
Bases:
OllamaWrapperBase
The model wrapper for Ollama chat API.
- Response:
Refer to
https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion
{ "model": "registry.ollama.ai/library/llama3:latest", "created_at": "2023-12-12T14:13:43.416799Z", "message": { "role": "assistant", "content": "Hello! How are you today?" }, "done": true, "total_duration": 5191566416, "load_duration": 2154458, "prompt_eval_count": 26, "prompt_eval_duration": 383809000, "eval_count": 298, "eval_duration": 4799921000 }
- format(*args: Msg | list[Msg] | None, multi_agent_mode: bool = True) → List[dict][source]
Format the messages for the Ollama chat API.
The system prompt is kept as a system message, and the conversation history is merged into a single user message.
Note: 1. This strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies. 2. For the Ollama chat API, the content field shouldn't be an empty string.
Example:
prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user"),
)
The prompt will be as follows:
[ { "role": "system", "content": "You're a helpful assistant" }, { "role": "user", "content": ( "## Conversation History\n" "Bob: Hi, how can I help you?\n" "user: What's the date today?" ) } ]
- Parameters:
args (Union[Msg, list[Msg], None]) – The input arguments to be formatted, where each argument should be a Msg object or a list of Msg objects. None inputs will be ignored.
multi_agent_mode (bool, defaults to True) – Whether to format the messages in multi-agent mode. If False, the messages will be formatted in chat mode, where only the user and assistant roles are involved.
- Returns:
The formatted messages.
- Return type:
List[dict]
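For orientation, a minimal end-to-end sketch follows. It assumes a running local Ollama server with llama3 already pulled, that OllamaChatWrapper is importable from agentscope.models, and that the wrapper is invoked as a callable returning a response object with a text field; the config name is hypothetical.

from agentscope.message import Msg
from agentscope.models import OllamaChatWrapper

# Hypothetical config name; assumes the Ollama server is running locally
# and `llama3` has already been pulled.
model = OllamaChatWrapper(
    config_name="my_ollama_chat",
    model_name="llama3",
    options={"temperature": 0},
    keep_alive="5m",
)

prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("user", "What's the date today?", role="user"),
)

response = model(prompt)  # assumed callable interface
print(response.text)      # assumed field on the returned response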
- config_name: str
The name of the model configuration.
- keep_alive: str
Controls how long the model will stay loaded into memory following the request.
- model_name: str
The model name used in the Ollama API.
- model_type: str = 'ollama_chat'
The type of the model wrapper, used to identify the model wrapper class in the model configuration.
- options: dict
A dict containing the options for the Ollama generation API, e.g. {"temperature": 0, "seed": 123}.
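In most AgentScope programs the wrapper is selected through a model configuration rather than constructed by hand. A sketch of such a configuration, assuming agentscope.init accepts model_configs in this form (the config_name value is illustrative):

import agentscope

agentscope.init(
    model_configs=[
        {
            "config_name": "my_ollama_chat",  # hypothetical name
            "model_type": "ollama_chat",      # matches model_type above
            "model_name": "llama3",
            "options": {"temperature": 0, "seed": 123},
            "keep_alive": "5m",
        }
    ]
)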
- class OllamaEmbeddingWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]
Bases:
OllamaWrapperBase
The model wrapper for Ollama embedding API.
- Response:
Refer to
https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings
{ "model": "all-minilm", "embeddings": [[ 0.010071029, -0.0017594862, 0.05007221, 0.04692972, 0.008599704, 0.105441414, -0.025878139, 0.12958129, ]] }
- config_name: str
The name of the model configuration.
- keep_alive: str
Controls how long the model will stay loaded into memory following the request.
- model_name: str
The model name used in the Ollama API.
- model_type: str = 'ollama_embedding'
The type of the model wrapper, used to identify the model wrapper class in the model configuration.
- options: dict
A dict containing the options for the Ollama generation API, e.g. {"temperature": 0, "seed": 123}.
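A minimal usage sketch, assuming the wrapper is called with the text to embed and that the returned response exposes the vector through an embedding attribute (the config name is hypothetical):

from agentscope.models import OllamaEmbeddingWrapper

# Assumes the Ollama server is running locally and `all-minilm` is pulled.
model = OllamaEmbeddingWrapper(
    config_name="my_ollama_embedding",  # hypothetical name
    model_name="all-minilm",
)

response = model("Hello, world!")  # assumed callable interface
print(response.embedding)          # assumed field on the returned response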
- class OllamaGenerationWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]
Bases:
OllamaWrapperBase
The model wrapper for Ollama generation API.
- Response:
From
https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion
{ "model": "llama3", "created_at": "2023-08-04T19:22:45.499127Z", "response": "The sky is blue because it is the color of sky.", "done": true, "context": [1, 2, 3], "total_duration": 5043500667, "load_duration": 5025959, "prompt_eval_count": 26, "prompt_eval_duration": 325953000, "eval_count": 290, "eval_duration": 4709213000 }
- config_name: str
The name of the model configuration.
- keep_alive: str
Controls how long the model will stay loaded into memory following the request.
- model_name: str
The model name used in the Ollama API.
- model_type: str = 'ollama_generate'
The type of the model wrapper, used to identify the model wrapper class in the model configuration.
- options: dict
A dict containing the options for the Ollama generation API, e.g. {"temperature": 0, "seed": 123}.
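A minimal usage sketch, assuming the wrapper is called with a raw prompt string and returns a response object with a text field (the config name is hypothetical):

from agentscope.models import OllamaGenerationWrapper

# Assumes the Ollama server is running locally and `llama3` is pulled.
model = OllamaGenerationWrapper(
    config_name="my_ollama_generate",  # hypothetical name
    model_name="llama3",
    options={"temperature": 0},
)

response = model("Why is the sky blue?")  # assumed callable interface
print(response.text)                      # assumed field on the returned response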
- class OllamaWrapperBase(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', host: str | None = None, **kwargs: Any)[source]
Bases:
ModelWrapperBase, ABC
The base class for Ollama model wrappers.
To use the Ollama API: 1. Install the Ollama server from https://ollama.com/download and start it. 2. Pull the model by running ollama pull {model_name} in a terminal. Once the model is pulled, the Ollama API is ready to use.
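As a quick sanity check that the server is up and the model is pulled, the ollama Python client package (which these wrappers rely on) can be used directly; the host value below is Ollama's default local endpoint and is an assumption:

import ollama

# Assumes the Ollama server is listening on its default local port.
client = ollama.Client(host="http://localhost:11434")
print(client.list())  # lists the locally pulled models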
- keep_alive: str
Controls how long the model will stay loaded into memory following the request.
- model_name: str
The model name used in the Ollama API.
- model_type: str
The type of the model wrapper, used to identify the model wrapper class in the model configuration.
- options: dict
A dict containing the options for the Ollama generation API, e.g. {"temperature": 0, "seed": 123}.