Task Decomposition Made Effortless: AutoGPT, the Autonomous ChatGPT, and Automated AI Tasks in Practice (Python 3.10)

When we use ChatGPT to get certain jobs done, it often takes multiple rounds of conversation. For example, to have ChatGPT analyze, translate, or summarize an online article or document and then save the summary as a local text file, we inevitably have to haggle back and forth with ChatGPT. In fact, this "negotiation" can itself be automated: AutoGPT can decompose the task for us automatically. That's right, anything a program can do, a human should never have to do by hand.

    The only thing we need to do is give AutoGPT a goal. AutoGPT will automatically break that goal down into a series of small tasks and complete them one by one, simply and efficiently.

    Configuring AutoGPT

    First, make sure Python 3.10.9 is installed locally.
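
    If you want to double-check the interpreter from code, here is a minimal sketch that relies only on the standard library:

import sys

# Auto-GPT targets Python 3.10.x; fail fast if the interpreter is older.
if sys.version_info < (3, 10):
    raise SystemExit(f"Python 3.10+ required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])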

    Then clone the project with Git:

git clone https://github.com/Significant-Gravitas/Auto-GPT.git

    Then change into the project directory:

cd Auto-GPT

    Install the required dependencies:

pip3 install -r requirements.txt

    Once the installation succeeds, copy the project's configuration template:

cp .env.template .env

    The cp command copies the configuration template .env.template into a new configuration file named .env.

    Next, put your OpenAI API key into the configuration file:

### OPENAI
# OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)
# TEMPERATURE - Sets temperature in OpenAI (Default: 0)
# USE_AZURE - Use Azure OpenAI or not (Default: False)
OPENAI_API_KEY=your-openai-api-key
TEMPERATURE=0
USE_AZURE=False

    Besides the OpenAI API key itself, the template also exposes a TEMPERATURE setting and a USE_AZURE switch for Azure OpenAI deployments.
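
    To confirm that the key in .env is actually being picked up, a minimal sketch like the following can help; it assumes the python-dotenv package is available (install it with pip3 install python-dotenv if requirements.txt has not already pulled it in):

import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
key = os.getenv("OPENAI_API_KEY")
if not key or key == "your-openai-api-key":
    raise SystemExit("OPENAI_API_KEY is missing or still set to the placeholder")
print("OPENAI_API_KEY loaded, length:", len(key))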

    Of course, if you would rather not install all of these dependencies locally, you can also build Auto-GPT as a Docker container:

docker build -t autogpt .
docker run -it --env-file=./.env -v $PWD/auto_gpt_workspace:/app/auto_gpt_workspace autogpt

    Docker automatically reads the project's Dockerfile to build the image, which is quite convenient.

    At this point, Auto-GPT is configured.

    Running Auto-GPT

    From the project root, run:

python3 -m autogpt --debug

    This starts AutoGPT:

➜  Auto-GPT git:(master) python -m autogpt --debug       
Warning: The file 'AutoGpt.json' does not exist. Local memory would not be saved to a file.
Debug Mode:  ENABLED
Welcome to Auto-GPT!  Enter the name of your AI and its role below. Entering nothing will load defaults.
Name your AI:  For example, 'Entrepreneur-GPT'
AI Name:

    First, give your AutoGPT bot a name:

AI Name: v3u.cn
v3u.cn here!  I am at your service.
Describe your AI's role:  For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
v3u.cn is:

    Once it has a name, Auto-GPT is at your service.

    Now set a goal for AutoGPT:

v3u.cn is: Analyze the contents of this article,the url is https://v3u.cn/a_id_303,and write the result to goal.txt

    Here we ask AutoGPT to analyze and summarize the article at v3u.cn/a_id_303 and to write the result to a local file named goal.txt.

    The program returns:

Enter up to 5 goals for your AI:  For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1: 
Using memory of type:  LocalCache

    AutoGPT tells us the task can be split into at most five goals. We can enter them ourselves, or simply press Enter and let AutoGPT decompose the task on its own.
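
    Just to make the mechanics concrete, here is a rough, purely illustrative sketch of how a name, role, and goal list could be assembled into the kind of prompt that appears in the debug output below; it is not Auto-GPT's actual prompt builder, and the function name is hypothetical:

# Rough illustration only, not Auto-GPT's real prompt construction.
def build_prompt(ai_name: str, ai_role: str, goals: list[str]) -> str:
    lines = [f"You are {ai_name}, {ai_role}", "GOALS:"]
    for i, goal in enumerate(goals, start=1):
        lines.append(f"{i}. {goal}")
    lines.append("Determine which next command to use, "
                 "and respond using the format specified above:")
    return "\n".join(lines)


print(build_prompt(
    "v3u.cn",
    "an AI that analyzes articles and writes summaries to local files",
    ["Analyze the contents of https://v3u.cn/a_id_303", "Write the result to goal.txt"],
))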

    The program then crawls the article's content and analyzes it with the gpt-3.5-turbo model:

Goal 1: 
Using memory of type:  LocalCache
Using Browser:  chrome
  Token limit: 4000
  Memory Stats: (0, (0, 1536))
  Token limit: 4000
  Send Token Count: 936
  Tokens remaining for response: 3064
  ------------ CONTEXT SENT TO AI ---------------
  System: The current time and date is Mon Apr 17 20:29:37 2023
  
  System: This reminds you of these events from your past:



  
  User: Determine which next command to use, and respond using the format specified above:
  
  ----------- END OF CONTEXT ----------------
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 3064
The JSON object is valid.
 THOUGHTS:  Let's start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.
REASONING:  Browsing the article will allow us to analyze its contents and determine the appropriate next steps.
PLAN: 
-  Browse the article
-  Analyze its contents
-  Determine the appropriate next steps
CRITICISM:  None
NEXT ACTION:  COMMAND = browse_website ARGUMENTS = {'url': 'https://v3u.cn/a_id_303', 'question': 'analyze the contents of the article'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-= 
[WDM] - Downloading: 100%|████████████████████████████████████████████| 8.04M/8.04M [00:03<00:00, 2.42MB/s]
Text length: 6977 characters
Adding chunk 1 / 1 to memory
Summarizing chunk 1 / 1
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 300
SYSTEM:  Command browse_website returned: Error: This model's maximum context length is 4097 tokens. However, you requested 4339 tokens (4039 in the messages, 300 in the completion). Please reduce the length of the messages or completion.
  Token limit: 4000
  Memory Stats: (2, (2, 1536))
  Token limit: 4000
  Send Token Count: 1472
  Tokens remaining for response: 2528
  ------------ CONTEXT SENT TO AI ---------------
  System: The current time and date is Mon Apr 17 20:30:19 2023
  
  System: This reminds you of these events from your past:
['Assistant Reply: {\n    "thoughts": {\n        "text": "Let\'s start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.",\n        "reasoning": "Browsing the article will allow us to analyze its contents and determine the appropriate next steps.",\n        "plan": "- Browse the article\\n- Analyze its contents\\n- Determine the appropriate next steps",\n        "criticism": "None",\n        "speak": "I suggest we start by browsing the article at the given URL to analyze its contents and determine the appropriate next steps."\n    },\n    "command": {\n        "name": "browse_website",\n        "args": {\n            "url": "https://v3u.cn/a_id_303",\n            "question": "analyze the contents of the article"\n        }\n    }\n} \nResult: Command browse_website returned: Error: This model\'s maximum context length is 4097 tokens. However, you requested 4339 tokens (4039 in the messages, 300 in the completion). Please reduce the length of the messages or completion. \nHuman Feedback: GENERATE NEXT COMMAND JSON ']


  
  User: Determine which next command to use, and respond using the format specified above:
  
  Assistant: {
    "thoughts": {
        "text": "Let's start by browsing the article at https://v3u.cn/a_id_303 to get a better understanding of its contents.",
        "reasoning": "Browsing the article will allow us to analyze its contents and determine the appropriate next steps.",
        "plan": "- Browse the article\n- Analyze its contents\n- Determine the appropriate next steps",
        "criticism": "None",
        "speak": "I suggest we start by browsing the article at the given URL to analyze its contents and determine the appropriate next steps."
    },
    "command": {
        "name": "browse_website",
        "args": {
            "url": "https://v3u.cn/a_id_303",
            "question": "analyze the contents of the article"
        }
    }
}
  
  
  User: Determine which next command to use, and respond using the format specified above:
  
  ----------- END OF CONTEXT ----------------
Creating chat completion with model gpt-3.5-turbo, temperature 0.0, max_tokens 2528
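
    Note the token-limit error in the output above: gpt-3.5-turbo has a 4097-token context window, which is why the page text gets split into chunks that are summarized one by one. The following is a minimal sketch of that chunk-and-summarize idea using the pre-1.0 openai Python package; it only approximates what the log shows, it is not Auto-GPT's actual code, and the character-based chunk size is a crude stand-in for real token counting:

import openai  # assumes the pre-1.0 openai package (openai.ChatCompletion API)

openai.api_key = "your-openai-api-key"  # placeholder


def split_text(text: str, max_chars: int = 8000) -> list[str]:
    # Crude chunking by character count (roughly 4 characters per token),
    # just enough to stay under the 4097-token context window.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def summarize(text: str, question: str) -> str:
    summaries = []
    for chunk in split_text(text):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0.0,
            max_tokens=300,
            messages=[{
                "role": "user",
                "content": f'"""{chunk}""" Using the above text, answer: "{question}"',
            }],
        )
        summaries.append(resp["choices"][0]["message"]["content"])
    return "\n".join(summaries)

    Calling summarize(page_text, "analyze the contents of the article") with a valid key would produce a per-chunk summary along the lines of the result the agent writes out below.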

    Finally, the analysis result is written to goal.txt:

This article explains that Apple's Mac computers can handle machine learning and deep learning workloads, backs this up by installing and running the TensorFlow deep learning framework, and gives an in-depth comparison and benchmark of TensorFlow's CPU and GPU training modes.

    Done in one smooth, uninterrupted pass.

    Conclusion

    What sets AutoGPT apart from other AI programs is that it focuses specifically on generating prompts and carrying out multi-step tasks without human intervention. It can also scan the internet or execute commands on the user's computer to gather information, which distinguishes it from AI programs that rely only on pre-existing datasets.

    The underlying logic of AutoGPT is not complicated: it first searches the web for material related to the task, then hands the results and the goal to GPT and asks for a serialized plan in JSON; the plan is fed back to GPT piece by piece, and finally a shell is used to create Python files, json.load the plan, and execute it. The whole thing repeats recursively until the goal is reached.
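
    A heavily simplified, purely illustrative sketch of that loop is shown below; the helper functions are hypothetical stand-ins, not the actual Auto-GPT source:

import json


def ask_gpt(goal: str, history: list) -> str:
    # Hypothetical stand-in for a chat-completion call; the real loop would
    # send the goal plus the accumulated history to gpt-3.5-turbo and get
    # back a JSON reply like the "Assistant" block shown earlier.
    return json.dumps({"command": {"name": "task_complete", "args": {"reason": goal}}})


def run_command(command: dict) -> str:
    # Hypothetical dispatcher; the real program maps command names such as
    # browse_website or write_to_file onto Python functions or shell calls.
    return f"executed {command['name']} with {command['args']}"


def agent_loop(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        action = json.loads(ask_gpt(goal, history))   # GPT replies with its next step as JSON
        result = run_command(action["command"])
        history.append({"action": action, "result": result})
        if action["command"]["name"] == "task_complete":
            break
    return history


print(agent_loop("Summarize https://v3u.cn/a_id_303 into goal.txt"))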

    Undeniably, simple as the implementation logic is, this amounts to a kind of "self-evolution". As time goes on, AutoGPT should be able to handle ever more complex tasks.
