.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "build_tutorial/prompt.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_build_tutorial_prompt.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_build_tutorial_prompt.py:

.. _prompt-engineering:

Prompt Formatting
================================

AgentScope helps developers build prompts that fit different model APIs by
providing a set of built-in formatting strategies for both chat and
multi-agent scenarios. Specifically, AgentScope supports both model-specific
and model-agnostic formatting.

.. tip:: A **chat scenario** refers to a conversation between a user and an
    assistant, while a **multi-agent scenario** involves multiple agents with
    different names (though their roles are all "assistant").

.. note:: Currently, most LLM API providers only support the chat scenario:
    only two roles (user and assistant) are involved in the conversation,
    and sometimes they must speak alternately.

.. note:: There is no **one-size-fits-all** solution for prompt formatting.
    The goal of the built-in strategies is to **enable beginners to invoke
    the model API smoothly, rather than to achieve the best performance**.
    For advanced usage, we highly recommend customizing prompts according to
    your needs and the requirements of the target model API.

Model-Agnostic Formatting
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When you want your application to work with different model APIs
simultaneously, model-agnostic formatting is a good choice. AgentScope
achieves this by supporting model loading from configuration, and by
presetting a collection of built-in formatting strategies for different
model APIs and scenarios (chat or multi-agent) in the model wrapper class.

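Conceptually, a formatting strategy is just a transformation from AgentScope's
`Msg` objects (a sender name, content, and a role) into the message schema a
particular API expects. The function below is a hypothetical, self-contained
sketch of that idea for a DashScope-style schema; it is not AgentScope's
actual code, and `format_chat_sketch` is an illustrative name only.

```python
# Illustrative sketch, NOT AgentScope's implementation: a chat-scenario
# formatting strategy maps (name, content, role) messages onto the
# role/content schema that a chat API such as DashScope expects.
def format_chat_sketch(messages: list[tuple[str, str, str]]) -> list[dict]:
    """Map (name, content, role) triples onto a chat-API message schema."""
    return [
        # The sender name is dropped in the chat scenario; only the role
        # and the content survive in the formatted prompt.
        {"role": role, "content": [{"text": content}]}
        for name, content, role in messages
    ]


prompt = format_chat_sketch(
    [
        ("system", "You're a helpful assistant.", "system"),
        ("assistant", "Hi!", "assistant"),
        ("user", "Nice to meet you!", "user"),
    ],
)
print(prompt[0])
# {'role': 'system', 'content': [{'text': "You're a helpful assistant."}]}
```

Each model wrapper ships its own variant of this transformation, which is why
application code never needs to know the target schema.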
You can directly use the `format` method of the model object to format the
input messages without knowing the details of the model API. Taking the
DashScope Chat API as an example:

.. GENERATED FROM PYTHON SOURCE LINES 45-77

.. code-block:: Python

    from typing import Union, Optional

    from agentscope.agents import AgentBase
    from agentscope.message import Msg
    from agentscope.manager import ModelManager
    import agentscope
    import json

    # Load the model configuration
    agentscope.init(
        model_configs={
            "config_name": "my_qwen",
            "model_type": "dashscope_chat",
            "model_name": "qwen-max",
        },
    )

    # Get the model object from the model manager
    model = ModelManager.get_instance().get_model_by_config_name("my_qwen")

    # A single `Msg` object or a list of `Msg` objects can be passed to the
    # `format` method
    prompt = model.format(
        Msg("system", "You're a helpful assistant.", "system"),
        [
            Msg("assistant", "Hi!", "assistant"),
            Msg("user", "Nice to meet you!", "user"),
        ],
        multi_agent_mode=False,
    )

    print(json.dumps(prompt, indent=4, ensure_ascii=False))

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    [
        {
            "role": "system",
            "content": [
                {
                    "text": "You're a helpful assistant."
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
                {
                    "text": "Hi!"
                }
            ]
        },
        {
            "role": "user",
            "content": [
                {
                    "text": "Nice to meet you!"
                }
            ]
        }
    ]

.. GENERATED FROM PYTHON SOURCE LINES 78-80

After formatting the input messages, we can pass the prompt to the model
object.

.. GENERATED FROM PYTHON SOURCE LINES 80-85

.. code-block:: Python

    response = model(prompt)
    print(response.text)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Nice to meet you too! How can I assist you today?

.. GENERATED FROM PYTHON SOURCE LINES 86-88

Also, you can format messages for the multi-agent scenario by setting
`multi_agent_mode=True`.

.. GENERATED FROM PYTHON SOURCE LINES 88-100

.. code-block:: Python

    prompt = model.format(
        Msg("system", "You're a helpful assistant named Alice.", "system"),
        [
            Msg("Alice", "Hi!", "assistant"),
            Msg("Bob", "Nice to meet you!", "assistant"),
        ],
        multi_agent_mode=True,
    )
    print(json.dumps(prompt, indent=4, ensure_ascii=False))

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    [
        {
            "role": "system",
            "content": [
                {
                    "text": "You're a helpful assistant named Alice."
                }
            ]
        },
        {
            "role": "user",
            "content": [
                {
                    "text": "## Conversation History\nAlice: Hi!\nBob: Nice to meet you!"
                }
            ]
        }
    ]

.. GENERATED FROM PYTHON SOURCE LINES 101-102

Within an agent, model-agnostic formatting is achieved as follows:

.. GENERATED FROM PYTHON SOURCE LINES 102-125

.. code-block:: Python

    class MyAgent(AgentBase):
        def __init__(self, name: str, model_config_name: str, **kwargs) -> None:
            super().__init__(name=name, model_config_name=model_config_name)
            # ...

        def reply(self, x: Optional[Union[Msg, list[Msg]]] = None) -> Msg:
            # ...

            # Format the messages without knowing the model API
            prompt = self.model.format(
                Msg("system", "{your system prompt}", "system"),
                self.memory.get_memory(),
                multi_agent_mode=True,
            )
            response = self.model(prompt)
            # ...
            return Msg(self.name, response.text, role="assistant")

.. GENERATED FROM PYTHON SOURCE LINES 126-129

.. tip:: All the formatting strategies are implemented in the
    `agentscope.formatters` module. The model wrapper decides which strategy
    to use based on the model name.

.. GENERATED FROM PYTHON SOURCE LINES 131-139

Model-Specific Formatting
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The `agentscope.formatters` module implements the built-in formatting
strategies for different model APIs and scenarios. The formatter classes
provide `format_chat` and `format_multi_agent` methods, as well as a
`format_auto` method that automatically selects the appropriate one based
on the input messages.

.. GENERATED FROM PYTHON SOURCE LINES 139-156

.. code-block:: Python

    from agentscope.formatters import OpenAIFormatter

    multi_agent_messages = [
        Msg("system", "You're a helpful assistant named Alice.", "system"),
        Msg("Alice", "Hi!", "assistant"),
        Msg("Bob", "Nice to meet you!", "assistant"),
        Msg("Charlie", "Nice to meet you, too!", "user"),
    ]

    chat_messages = [
        Msg("system", "You're a helpful assistant named Alice.", "system"),
        Msg("Bob", "Nice to meet you!", "user"),
        Msg("Alice", "Hi! How can I help you?", "assistant"),
    ]

.. GENERATED FROM PYTHON SOURCE LINES 157-158

Multi-agent scenario:

.. GENERATED FROM PYTHON SOURCE LINES 158-164

.. code-block:: Python

    formatted_multi_agent = OpenAIFormatter.format_multi_agent(
        multi_agent_messages,
    )
    print(json.dumps(formatted_multi_agent, indent=4, ensure_ascii=False))

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    [
        {
            "role": "system",
            "name": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You're a helpful assistant named Alice."
                }
            ]
        },
        {
            "role": "assistant",
            "name": "Alice",
            "content": [
                {
                    "type": "text",
                    "text": "Hi!"
                }
            ]
        },
        {
            "role": "assistant",
            "name": "Bob",
            "content": [
                {
                    "type": "text",
                    "text": "Nice to meet you!"
                }
            ]
        },
        {
            "role": "user",
            "name": "Charlie",
            "content": [
                {
                    "type": "text",
                    "text": "Nice to meet you, too!"
                }
            ]
        }
    ]

.. GENERATED FROM PYTHON SOURCE LINES 165-166

Chat scenario:

.. GENERATED FROM PYTHON SOURCE LINES 166-172

.. code-block:: Python

    formatted_chat = OpenAIFormatter.format_chat(
        chat_messages,
    )
    print(json.dumps(formatted_chat, indent=4, ensure_ascii=False))

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    [
        {
            "role": "system",
            "name": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You're a helpful assistant named Alice."
                }
            ]
        },
        {
            "role": "user",
            "name": "Bob",
            "content": [
                {
                    "type": "text",
                    "text": "Nice to meet you!"
                }
            ]
        },
        {
            "role": "assistant",
            "name": "Alice",
            "content": [
                {
                    "type": "text",
                    "text": "Hi! How can I help you?"
                }
            ]
        }
    ]

.. GENERATED FROM PYTHON SOURCE LINES 173-174

Auto formatting when only two entities are involved:

.. GENERATED FROM PYTHON SOURCE LINES 174-180

.. code-block:: Python

    formatted_auto_chat = OpenAIFormatter.format_auto(
        chat_messages,
    )
    print(json.dumps(formatted_auto_chat, indent=4, ensure_ascii=False))

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    [
        {
            "role": "system",
            "name": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You're a helpful assistant named Alice."
                }
            ]
        },
        {
            "role": "user",
            "name": "Bob",
            "content": [
                {
                    "type": "text",
                    "text": "Nice to meet you!"
                }
            ]
        },
        {
            "role": "assistant",
            "name": "Alice",
            "content": [
                {
                    "type": "text",
                    "text": "Hi! How can I help you?"
                }
            ]
        }
    ]

.. GENERATED FROM PYTHON SOURCE LINES 181-182

Auto formatting when more than two entities (multi-agent) are involved:

.. GENERATED FROM PYTHON SOURCE LINES 182-188

.. code-block:: Python

    formatted_auto_multi_agent = OpenAIFormatter.format_auto(
        multi_agent_messages,
    )
    print(json.dumps(formatted_auto_multi_agent, indent=4, ensure_ascii=False))

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    [
        {
            "role": "system",
            "name": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You're a helpful assistant named Alice."
                }
            ]
        },
        {
            "role": "assistant",
            "name": "Alice",
            "content": [
                {
                    "type": "text",
                    "text": "Hi!"
                }
            ]
        },
        {
            "role": "assistant",
            "name": "Bob",
            "content": [
                {
                    "type": "text",
                    "text": "Nice to meet you!"
                }
            ]
        },
        {
            "role": "user",
            "name": "Charlie",
            "content": [
                {
                    "type": "text",
                    "text": "Nice to meet you, too!"
                }
            ]
        }
    ]

.. GENERATED FROM PYTHON SOURCE LINES 189-190

The available formatter classes are:

.. GENERATED FROM PYTHON SOURCE LINES 190-199

.. code-block:: Python

    from agentscope.formatters import (
        CommonFormatter,
        AnthropicFormatter,
        OpenAIFormatter,
        GeminiFormatter,
        DashScopeFormatter,
    )

.. GENERATED FROM PYTHON SOURCE LINES 200-208

The `CommonFormatter` is a basic formatter for common chat LLMs, such as the
ZhipuAI API, Yi API, ollama, LiteLLM, etc.

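A point worth noting about the multi-agent strategies: since most chat APIs
only accept the user/assistant/system roles, the built-in multi-agent
formatting merges the named agents' messages into a single user message (the
`## Conversation History` block in the earlier output). The function below is
a minimal, hypothetical sketch of that merge for illustration only, not
AgentScope's actual implementation.

```python
# Illustrative sketch, NOT AgentScope's implementation: merge named-agent
# messages into one user message, mirroring the "## Conversation History"
# block seen in the multi-agent output above.
def merge_history_sketch(messages: list[tuple[str, str]]) -> dict:
    """Merge (name, text) messages into a single user-role message."""
    # Prefix each line with the speaker's name so the model can tell
    # the agents apart even though they all share the "assistant" role.
    history = "\n".join(f"{name}: {text}" for name, text in messages)
    return {
        "role": "user",
        "content": [{"text": "## Conversation History\n" + history}],
    }


merged = merge_history_sketch([("Alice", "Hi!"), ("Bob", "Nice to meet you!")])
print(merged["content"][0]["text"])
# ## Conversation History
# Alice: Hi!
# Bob: Nice to meet you!
```

This is why the multi-agent output earlier collapses Alice's and Bob's turns
into one user message while the chat-scenario output keeps one API message
per turn.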
Vision Models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For vision models, AgentScope currently supports the OpenAI, DashScope and
Anthropic vision model APIs.

.. GENERATED FROM PYTHON SOURCE LINES 208-229

.. code-block:: Python

    from agentscope.message import TextBlock, ImageBlock

    # We create a fake image locally
    with open("./image.jpg", "w") as f:
        f.write("fake image")

    multi_modal_messages = [
        Msg("system", "You're a helpful assistant named Alice.", "system"),
        Msg(
            "Alice",
            [
                TextBlock(type="text", text="Help me to describe the two images?"),
                ImageBlock(type="image", url="https://example.com/image.jpg"),
                ImageBlock(type="image", url="./image.jpg"),
            ],
            "user",
        ),
        Msg("Bob", "Sure!", "assistant"),
    ]

.. GENERATED FROM PYTHON SOURCE LINES 230-234

.. code-block:: Python

    print("OpenAI prompt:")
    openai_prompt = OpenAIFormatter.format_chat(multi_modal_messages)
    print(json.dumps(openai_prompt, indent=4, ensure_ascii=False))

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    OpenAI prompt:
    [
        {
            "role": "system",
            "name": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You're a helpful assistant named Alice."
                }
            ]
        },
        {
            "role": "user",
            "name": "Alice",
            "content": [
                {
                    "type": "text",
                    "text": "Help me to describe the two images?"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/image.jpg"
                    }
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "data:image/jpg;base64,ZmFrZSBpbWFnZQ=="
                    }
                }
            ]
        },
        {
            "role": "assistant",
            "name": "Bob",
            "content": [
                {
                    "type": "text",
                    "text": "Sure!"
                }
            ]
        }
    ]

.. GENERATED FROM PYTHON SOURCE LINES 236-240

.. code-block:: Python

    print("\nDashscope prompt:")
    dashscope_prompt = DashScopeFormatter.format_chat(multi_modal_messages)
    print(json.dumps(dashscope_prompt, indent=4, ensure_ascii=False))

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Dashscope prompt:
    [
        {
            "role": "system",
            "content": [
                {
                    "text": "You're a helpful assistant named Alice."
                }
            ]
        },
        {
            "role": "user",
            "content": [
                {
                    "text": "Help me to describe the two images?"
                },
                {
                    "image": "https://example.com/image.jpg"
                },
                {
                    "image": "./image.jpg"
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
                {
                    "text": "Sure!"
                }
            ]
        }
    ]

.. GENERATED FROM PYTHON SOURCE LINES 242-245

.. code-block:: Python

    print("\nAnthropic prompt:")
    anthropic_prompt = AnthropicFormatter.format_chat(multi_modal_messages)
    print(json.dumps(anthropic_prompt, indent=4, ensure_ascii=False))

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Anthropic prompt:
    [
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You're a helpful assistant named Alice."
                }
            ]
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Help me to describe the two images?"
                },
                {
                    "type": "image",
                    "source": "https://example.com/image.jpg"
                },
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": "data:image/jpeg;base64,ZmFrZSBpbWFnZQ=="
                    }
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
                {
                    "type": "text",
                    "text": "Sure!"
                }
            ]
        }
    ]

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 1.360 seconds)


.. _sphx_glr_download_build_tutorial_prompt.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: prompt.ipynb <prompt.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: prompt.py <prompt.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: prompt.zip <prompt.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_