Sharing an open-source framework for quickly building AI agent applications: Qwen-Agent


Qwen-Agent is built on top of Qwen2 and provides a framework for developing LLM applications. The framework offers function calling, a code interpreter, RAG, and a Chrome browser extension.

The project ships with several example applications, such as a browser assistant, a code interpreter, and a custom assistant. You can install the stable release from PyPI or install the latest development version from source.

You can use either the model service provided by DashScope or a self-hosted model service (for example, Ollama).
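As a minimal sketch of the self-hosted option (the model tag and port below are assumptions; adjust them to whatever your local server actually exposes), an OpenAI-compatible endpoint such as Ollama can be configured like this:

from qwen_agent.agents import Assistant

# Assumed setup: an Ollama server on localhost:11434 serving the `qwen2:7b` tag.
# Ollama exposes an OpenAI-compatible API under /v1.
local_llm_cfg = {
    'model': 'qwen2:7b',
    'model_server': 'http://localhost:11434/v1',
    'api_key': 'EMPTY',  # a local server does not require a real key
}

bot = Assistant(llm=local_llm_cfg)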

Developers can extend the framework by registering custom tools and can create their own agents to handle specific tasks. The project also provides a fast RAG solution for question answering over very long documents, which outperforms native long-context models on two challenging benchmarks. In addition, it includes a browser assistant named BrowserQwen, built on top of Qwen-Agent. Note that the code interpreter is not sandboxed, so it is not recommended for production use.
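As a rough sketch of the long-document QA usage (the file path and question below are placeholders, not from the original post), the built-in Assistant can read a file and answer questions about it:

from qwen_agent.agents import Assistant

llm_cfg = {'model': 'qwen-max', 'model_server': 'dashscope'}
# Placeholder path: point this at your own long document.
bot = Assistant(llm=llm_cfg, files=['./my_long_report.pdf'])

messages = [{'role': 'user', 'content': 'Summarize the key findings of this report.'}]
response = []
for response in bot.run(messages=messages):
    pass  # `bot.run` streams partial responses; the last one is the full reply.
print(response)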

Installation:

pip install -U qwen-agent
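
To get the latest development version instead, installing from source should look roughly like this (check the repository README for the exact optional extras):

git clone https://github.com/QwenLM/Qwen-Agent.git
cd Qwen-Agent
pip install -e ./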

Example: creating an agent for text-to-image generation

import pprint
import urllib.parse
import json5
from qwen_agent.agents import Assistant
from qwen_agent.tools.base import BaseTool, register_tool


# Step 1 (Optional): Add a custom tool named `my_image_gen`.
@register_tool('my_image_gen')
class MyImageGen(BaseTool):
    # The `description` tells the agent the functionality of this tool.
    description = 'AI painting (image generation) service, input text description, and return the image URL drawn based on text information.'
    # The `parameters` tell the agent what input parameters the tool has.
    parameters = [{
        'name': 'prompt',
        'type': 'string',
        'description': 'Detailed description of the desired image content, in English',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        # `params` are the arguments generated by the LLM agent.
        prompt = json5.loads(params)['prompt']
        prompt = urllib.parse.quote(prompt)
        return json5.dumps(
            {'image_url': f'https://image.pollinations.ai/prompt/{prompt}'},
            ensure_ascii=False)


# Step 2: Configure the LLM you are using.
llm_cfg = {
    # Use the model service provided by DashScope:
    'model': 'qwen-max',
    'model_server': 'dashscope',
    # 'api_key': 'YOUR_DASHSCOPE_API_KEY',
    # It will use the `DASHSCOPE_API_KEY` environment variable if `api_key` is not set here.

    # Use a model service compatible with the OpenAI API, such as vLLM or Ollama:
    # 'model': 'Qwen2-7B-Chat',
    # 'model_server': 'http://localhost:8000/v1',  # base_url, also known as api_base
    # 'api_key': 'EMPTY',
    # (Optional) LLM hyperparameters for generation:
    'generate_cfg': {
        'top_p': 0.8
    }
}

# Step 3: Create an agent. Here we use the `Assistant` agent as an example, which is capable of using tools and reading files.
system_instruction = '''You are a helpful assistant.
After receiving the user's request, you should:
- first draw an image and obtain the image url,
- then run code `requests.get(image_url)` to download the image,
- and finally select an image operation from the given document to process the image.
Please show the image using `plt.show()`.'''
tools = ['my_image_gen', 'code_interpreter']  # `code_interpreter` is a built-in tool for executing code.
files = ['./examples/resource/doc.pdf']  # Give the bot a PDF file to read.
bot = Assistant(llm=llm_cfg,
                system_message=system_instruction,
                function_list=tools,
                files=files)

# Step 4: Run the agent as a chatbot.
messages = []  # This stores the chat history.
while True:
    # For example, enter the query "draw a dog and rotate it 90 degrees".
    query = input('user query: ')
    # Append the user query to the chat history.
    messages.append({'role': 'user', 'content': query})
    response = []
    for response in bot.run(messages=messages):
        # Streaming output.
        print('bot response:')
        pprint.pprint(response, indent=2)
    # Append the bot responses to the chat history.
    messages.extend(response)

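Besides the command-line loop above, the repository also documents a Gradio-based web UI; assuming the GUI dependencies are installed, launching it for the same `bot` looks roughly like this:

from qwen_agent.gui import WebUI

# Serve the same `bot` defined above behind a Gradio web interface.
WebUI(bot).run()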

GitHub: https://github.com/QwenLM/Qwen-Agent
