agentscope.service.text_processing.summarization module
Service for text processing
- agentscope.service.text_processing.summarization.summarization(model: ModelWrapperBase, text: str, system_prompt: str = '\nYou are a helpful agent to summarize the text.\nYou need to keep all the key information of the text in the summary.\n', max_return_token: int = -1, token_limit_prompt: str = '\nSummarize the text after TEXT in less than {} tokens:\n') → ServiceResponse [source]
Summarize the input text.
Note: the current version of the token limitation is built on the OpenAI API.
- Parameters:
model (ModelWrapperBase) – Model used to summarize provided text.
text (str) – Text to be summarized by the model.
system_prompt (str, defaults to _DEFAULT_SYSTEM_PROMPT) – The system prompt that instructs the model how to summarize the text.
max_return_token (int, defaults to -1) – The maximum number of tokens allowed in the returned summary. If set to a positive value, an additional prompting instruction (token_limit_prompt) is added to limit the length of the summary; -1 means no limit.
token_limit_prompt (str, defaults to _DEFAULT_TOKEN_LIMIT_PROMPT) – The prompt that instructs the model to follow the token limitation.
- Returns:
If the model summarizes the text successfully and the summary satisfies the token limitation (if any), a ServiceResponse with ServiceExecStatus.SUCCESS is returned; otherwise a ServiceResponse with ServiceExecStatus.ERROR is returned. If the summary is generated successfully but exceeds the token limit, the content still contains the summary.
- Return type:
ServiceResponse
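A minimal usage sketch follows; the model wrapper instance (my_model) and the sample text are placeholders for illustration, and summarization and ServiceExecStatus are assumed to be importable from agentscope.service:

    from agentscope.service import summarization, ServiceExecStatus

    # my_model: any initialized ModelWrapperBase instance (placeholder)
    long_text = "AgentScope is a multi-agent platform ..."  # placeholder text

    response = summarization(
        model=my_model,
        text=long_text,
        max_return_token=100,  # ask the model to keep the summary under 100 tokens
    )

    if response.status == ServiceExecStatus.SUCCESS:
        print(response.content)  # the summary text
    else:
        # the summary may still be present in content even if it exceeds the limit
        print("Summarization failed or exceeded the token limit:", response.content)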
Example:
The default messages constructed from the text to be summarized:
[ { "role": "system", "name": "system", "content": "You are a helpful agent to summarize the text.\ You need to keep all the key information of the text in the\ summary." }, { "role": "user", "name": "user", "content": text }, ]
The messages are processed by model.format() before being fed to the model.
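A custom system prompt and token-limit prompt can be supplied through the parameters documented above; the prompt wording below is purely illustrative, continuing the sketch from the earlier example:

    custom_response = summarization(
        model=my_model,       # as in the previous sketch
        text=long_text,
        system_prompt=(
            "You are a helpful agent to summarize the text.\n"
            "Keep all key figures and named entities in the summary.\n"
        ),
        max_return_token=150,
        token_limit_prompt="Summarize the text after TEXT in less than {} tokens:\n",
    )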