Source code for agentscope.service.text_processing.summarization
# -*- coding: utf-8 -*-
"""Service for text processing"""
from loguru import logger

from agentscope.models import ModelWrapperBase
from agentscope.service.service_status import ServiceExecStatus
from agentscope.service.service_response import ServiceResponse
from agentscope.message import Msg
from agentscope.constants import _DEFAULT_SYSTEM_PROMPT
from agentscope.constants import _DEFAULT_TOKEN_LIMIT_PROMPT
def summarization(
    model: ModelWrapperBase,
    text: str,
    system_prompt: str = _DEFAULT_SYSTEM_PROMPT,
    max_return_token: int = -1,
    token_limit_prompt: str = _DEFAULT_TOKEN_LIMIT_PROMPT,
) -> ServiceResponse:
    """Summarize the input text.

    Summarization function (Note: the current version of the token
    limitation is built for the OpenAI API).

    Args:
        model (`ModelWrapperBase`):
            Model used to summarize the provided text.
        text (`str`):
            Text to be summarized by the model.
        system_prompt (`str`, defaults to `_DEFAULT_SYSTEM_PROMPT`):
            Prompt used as the system instruction for the model.
        max_return_token (`int`, defaults to `-1`):
            If positive, an additional prompt instruction is appended to
            limit the number of tokens in the summary returned by the
            model.
        token_limit_prompt (`str`, defaults to `_DEFAULT_TOKEN_LIMIT_PROMPT`):
            Prompt to instruct the model to follow the token limitation.

    Returns:
        `ServiceResponse`: If the model successfully summarizes the text
        and the summary satisfies the provided token limitation, return
        `ServiceResponse` with `ServiceExecStatus.SUCCESS`; otherwise
        return `ServiceResponse` with `ServiceExecStatus.ERROR` (if the
        summary is returned successfully but exceeds the token limit, the
        content contains the summary as well).

    Example:
        The default messages with `text` to be summarized:

        .. code-block:: python

            [
                {
                    "role": "system",
                    "name": "system",
                    "content": "You are a helpful agent to summarize the text.\\
                    You need to keep all the key information of the text in the\\
                    summary."
                },
                {
                    "role": "user",
                    "name": "user",
                    "content": text
                },
            ]

        Messages will be processed by `model.format()` before being fed to
        the model.
    """
    # Append the token-limit instruction to the system prompt when a
    # positive limit is given.
    if max_return_token > 0:
        system_prompt += token_limit_prompt.format(max_return_token)
    try:
        # Build the default system/user messages, then adapt them to the
        # prompt format expected by the given model wrapper.
        msgs = [
            Msg(name="system", role="system", content=system_prompt),
            Msg(name="user", role="user", content=text),
        ]
        msgs = model.format(msgs)
        model_output = model(msgs)
        summary = model_output.text
        return ServiceResponse(
            ServiceExecStatus.SUCCESS,
            content=summary,
        )
    except ValueError as e:
        logger.exception(e)
        return ServiceResponse(
            ServiceExecStatus.ERROR,
            content=f"Summarization by model {model.model} failed",
        )
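For reference, a minimal usage sketch follows. It assumes a model config has already been registered via agentscope.init and loaded with load_model_by_config_name; the config name "my-openai-chat" and the model name are placeholders, so adjust them to your own setup.

    # A minimal usage sketch, not part of the module source.
    import agentscope
    from agentscope.models import load_model_by_config_name
    from agentscope.service import summarization
    from agentscope.service.service_status import ServiceExecStatus

    # Register a model config (placeholder values; adapt to your setup).
    agentscope.init(
        model_configs=[
            {
                "config_name": "my-openai-chat",  # hypothetical config name
                "model_type": "openai_chat",
                "model_name": "gpt-4",
            },
        ],
    )

    model = load_model_by_config_name("my-openai-chat")

    response = summarization(
        model=model,
        text="AgentScope is a multi-agent platform ...",  # text to condense
        max_return_token=128,  # appends the token-limit prompt instruction
    )

    if response.status == ServiceExecStatus.SUCCESS:
        print(response.content)  # the summary produced by the model
    else:
        print("Summarization failed:", response.content)

Since the token limit is enforced only through the appended prompt instruction, the model may still exceed it; callers should check the returned status rather than assume the length constraint was honored.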