2024/11/27 AutoGPT Platform v0.3.3
2024/11/23 AutoGPT Platform Agent Blocks
The official cloud-hosted agent platform, with an easy-to-use graphical interface.
Waitlist:
https://agpt.co/waitlist
2024/06/26 AutoGPT-Code-Ability: an open-source project that helps build and develop software
2024/05/14 AutoGPT latest version: v0.5.1
2023/12/14 AutoGPT v0.5.0 has just been launched
2023/10/12 Latest version: Auto-GPT v0.4.7
I got early access to the GPT-4 API at the start of the year and wanted to test how well AutoGPT actually performs, hence these notes. Since v0.3, AutoGPT no longer requires setting five goals. The project gets small updates almost daily, so wherever YouTube tutorials disagree with it, consult the latest documentation in the official GitHub repo. Everything below is a record of past learning resources, for reference only.
*The official docs give the same advice:
DO NOT FOLLOW ANY OUTSIDE TUTORIALS AS THEY WILL LIKELY BE OUT OF DATE
https://github.com/Significant-Gravitas/Auto-GPT
Auto-GPT releases:
https://github.com/Significant-Gravitas/Auto-GPT/releases
Discord community: https://discord.com/invite/autogpt
Apply for an OpenAI API key
https://openai.com/blog/openai-api
Set up the account's API key
Recommended: set a billing limit and usage alerts
System requirements
Python version
Official installation docs:
https://github.com/Significant-Gravitas/Auto-GPT
https://docs.agpt.co/autogpt/setup/
The project folder contains a Quickstart Guide document.
Run ./run setup
This will guide you through the process of setting up your system. On success you will see the following screen.
Name your agent:
./run agent create YOUR_AGENT_NAME
./run agent create FinetechGPT
➜ AutoGPT git:(master) ✗ ./run agent create FinetechGPT
🎉 New agent 'FinetechGPT' created. The code for your new agent is in: autogpts/FinetechGPT
🚀 If you would like to enter the arena, run './run arena enter FinetechGPT'
➜ AutoGPT git:(master) ✗
(omitted)
Videos:
How To Install Auto-GPT On Mac OS (Run AutoGPT In Terminal)
https://youtu.be/uPODYPmYKVw
Auto-GPT 使用教程!Mac 上 AutoGPT 在地安裝部署教程!AutoGPT 怎麽用看這個就夠了!Install AutoGPT Locally(MacOS&Windows)|數字牧民LC
Auto GPT 的安裝與使用
https://youtu.be/gemX8RrOfdM
Auto-GPT正确安装法宝:手把手教你安装Auto-GPT,解锁6大独特功能 | 回到Axton
pip install -r requirements.txt
.env.template
cp .env.template .env
This pitfall may have been fixed since; check the official setup docs to verify.
AutoGPT 0.3 大版本升级揭秘:极简Docker安装法+避坑指南 | 回到Axton
Pitfall fix: edit docker-compose.yml
Configuration file
2023/10/12 config file update:
version: "3.9"
services:
  auto-gpt:
    image: significantgravitas/auto-gpt
    env_file:
      - .env
    profiles: ["exclude-from-up"]
    volumes:
      - ./auto_gpt_workspace:/app/auto_gpt_workspace
      - ./data:/app/data
      ## allow auto-gpt to write logs to disk
      - ./logs:/app/logs
      ## uncomment following lines if you want to make use of these files
      ## you must have them existing in the same folder as this docker-compose.yml
      #- type: bind
      #  source: ./azure.yaml
      #  target: /app/azure.yaml
      #- type: bind
      #  source: ./ai_settings.yaml
      #  target: /app/ai_settings.yaml
Pinecone required joining a waitlist at the time of writing.
2023/07/03 update:
Pinecone support has been removed. Please use Redis or local memory instead.
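To move off the removed Pinecone backend, the relevant .env entries were roughly as follows (a sketch based on the .env.template settings quoted later in these notes; verify the exact names against your version's template):

```
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
WIPE_REDIS_ON_START=True
```

Leaving MEMORY_BACKEND unset falls back to the local json_file backend, which needs no extra services.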
2023/11/08 config file (.env template) update:
# For further descriptions of these settings see docs/configuration/options.md or go to docs.agpt.co
################################################################################
### AutoGPT - GENERAL SETTINGS
################################################################################
## OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)
# The most important part
OPENAI_API_KEY=<your own OpenAI API key>
## EXECUTE_LOCAL_COMMANDS - Allow local command execution (Default: False)
# EXECUTE_LOCAL_COMMANDS=False
## RESTRICT_TO_WORKSPACE - Restrict file operations to workspace ./auto_gpt_workspace (Default: True)
# RESTRICT_TO_WORKSPACE=True
## USER_AGENT - Define the user-agent used by the requests library to browse website (string)
# USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
## AI_SETTINGS_FILE - Specifies which AI Settings file to use, relative to the AutoGPT root directory. (defaults to ai_settings.yaml)
# AI_SETTINGS_FILE=ai_settings.yaml
## PLUGINS_CONFIG_FILE - The path to the plugins_config.yaml file, relative to the AutoGPT root directory. (Default plugins_config.yaml)
# PLUGINS_CONFIG_FILE=plugins_config.yaml
## PROMPT_SETTINGS_FILE - Specifies which Prompt Settings file to use, relative to the AutoGPT root directory. (defaults to prompt_settings.yaml)
# PROMPT_SETTINGS_FILE=prompt_settings.yaml
## OPENAI_API_BASE_URL - Custom url for the OpenAI API, useful for connecting to custom backends. No effect if USE_AZURE is true, leave blank to keep the default url
# the following is an example:
# OPENAI_API_BASE_URL=http://localhost:443/v1
## OPENAI_FUNCTIONS - Enables OpenAI functions: https://platform.openai.com/docs/guides/gpt/function-calling
## WARNING: this feature is only supported by OpenAI's newest models. Until these models become the default on 27 June, add a '-0613' suffix to the model of your choosing.
# OPENAI_FUNCTIONS=False
## AUTHORISE COMMAND KEY - Key to authorise commands
# AUTHORISE_COMMAND_KEY=y
## EXIT_KEY - Key to exit AutoGPT
# EXIT_KEY=n
## PLAIN_OUTPUT - Plain output, which disables the spinner (Default: False)
# PLAIN_OUTPUT=False
## DISABLED_COMMAND_CATEGORIES - The list of categories of commands that are disabled (Default: None)
# DISABLED_COMMAND_CATEGORIES=
################################################################################
### LLM PROVIDER
################################################################################
## TEMPERATURE - Sets temperature in OpenAI (Default: 0)
# TEMPERATURE=0
## OPENAI_ORGANIZATION - Your OpenAI Organization key (Default: None)
# OPENAI_ORGANIZATION=
## USE_AZURE - Use Azure OpenAI or not (Default: False)
# USE_AZURE=False
## AZURE_CONFIG_FILE - The path to the azure.yaml file, relative to the AutoGPT root directory. (Default: azure.yaml)
# AZURE_CONFIG_FILE=azure.yaml
################################################################################
### LLM MODELS
################################################################################
## SMART_LLM - Smart language model (Default: gpt-4-0314)
# SMART_LLM=gpt-4-0314
## FAST_LLM - Fast language model (Default: gpt-3.5-turbo-16k)
# FAST_LLM=gpt-3.5-turbo-16k
## EMBEDDING_MODEL - Model to use for creating embeddings
# EMBEDDING_MODEL=text-embedding-ada-002
################################################################################
### SHELL EXECUTION
################################################################################
## SHELL_COMMAND_CONTROL - Whether to use "allowlist" or "denylist" to determine what shell commands can be executed (Default: denylist)
# SHELL_COMMAND_CONTROL=denylist
## ONLY if SHELL_COMMAND_CONTROL is set to denylist:
## SHELL_DENYLIST - List of shell commands that ARE NOT allowed to be executed by AutoGPT (Default: sudo,su)
# SHELL_DENYLIST=sudo,su
## ONLY if SHELL_COMMAND_CONTROL is set to allowlist:
## SHELL_ALLOWLIST - List of shell commands that ARE allowed to be executed by AutoGPT (Default: None)
# SHELL_ALLOWLIST=
################################################################################
### MEMORY
################################################################################
### General
## MEMORY_BACKEND - Memory backend type
# MEMORY_BACKEND=json_file
## MEMORY_INDEX - Value used in the Memory backend for scoping, naming, or indexing (Default: auto-gpt)
# MEMORY_INDEX=auto-gpt
### Redis
## REDIS_HOST - Redis host (Default: localhost, use "redis" for docker-compose)
# REDIS_HOST=localhost
## REDIS_PORT - Redis port (Default: 6379)
# REDIS_PORT=6379
## REDIS_PASSWORD - Redis password (Default: "")
# REDIS_PASSWORD=
## WIPE_REDIS_ON_START - Wipes data / index on start (Default: True)
# WIPE_REDIS_ON_START=True
################################################################################
### IMAGE GENERATION PROVIDER
################################################################################
### Common
## IMAGE_PROVIDER - Image provider (Default: dalle)
# IMAGE_PROVIDER=dalle
## IMAGE_SIZE - Image size (Default: 256)
# IMAGE_SIZE=256
### Huggingface (IMAGE_PROVIDER=huggingface)
## HUGGINGFACE_IMAGE_MODEL - Text-to-image model from Huggingface (Default: CompVis/stable-diffusion-v1-4)
# HUGGINGFACE_IMAGE_MODEL=CompVis/stable-diffusion-v1-4
## HUGGINGFACE_API_TOKEN - HuggingFace API token (Default: None)
# HUGGINGFACE_API_TOKEN=
### Stable Diffusion (IMAGE_PROVIDER=sdwebui)
## SD_WEBUI_AUTH - Stable Diffusion Web UI username:password pair (Default: None)
# SD_WEBUI_AUTH=
## SD_WEBUI_URL - Stable Diffusion Web UI API URL (Default: http://localhost:7860)
# SD_WEBUI_URL=http://localhost:7860
################################################################################
### AUDIO TO TEXT PROVIDER
################################################################################
## AUDIO_TO_TEXT_PROVIDER - Audio-to-text provider (Default: huggingface)
# AUDIO_TO_TEXT_PROVIDER=huggingface
## HUGGINGFACE_AUDIO_TO_TEXT_MODEL - The model for HuggingFace to use (Default: CompVis/stable-diffusion-v1-4)
# HUGGINGFACE_AUDIO_TO_TEXT_MODEL=CompVis/stable-diffusion-v1-4
################################################################################
### GITHUB
################################################################################
## GITHUB_API_KEY - Github API key / PAT (Default: None)
# GITHUB_API_KEY=
## GITHUB_USERNAME - Github username (Default: None)
# GITHUB_USERNAME=
################################################################################
### WEB BROWSING
################################################################################
## HEADLESS_BROWSER - Whether to run the browser in headless mode (default: True)
# HEADLESS_BROWSER=True
## USE_WEB_BROWSER - Sets the web-browser driver to use with selenium (default: chrome)
# USE_WEB_BROWSER=chrome
## BROWSE_CHUNK_MAX_LENGTH - When browsing website, define the length of chunks to summarize (Default: 3000)
# BROWSE_CHUNK_MAX_LENGTH=3000
## BROWSE_SPACY_LANGUAGE_MODEL - spaCy language model](https://spacy.io/usage/models) to use when creating chunks. (Default: en_core_web_sm)
# BROWSE_SPACY_LANGUAGE_MODEL=en_core_web_sm
## GOOGLE_API_KEY - Google API key (Default: None)
# GOOGLE_API_KEY=
## GOOGLE_CUSTOM_SEARCH_ENGINE_ID - Google custom search engine ID (Default: None)
# GOOGLE_CUSTOM_SEARCH_ENGINE_ID=
################################################################################
### TEXT TO SPEECH PROVIDER
################################################################################
## TEXT_TO_SPEECH_PROVIDER - Which Text to Speech provider to use (Default: gtts)
# TEXT_TO_SPEECH_PROVIDER=gtts
### Only if TEXT_TO_SPEECH_PROVIDER=streamelements
## STREAMELEMENTS_VOICE - Voice to use for StreamElements (Default: Brian)
# STREAMELEMENTS_VOICE=Brian
### Only if TEXT_TO_SPEECH_PROVIDER=elevenlabs
## ELEVENLABS_API_KEY - Eleven Labs API key (Default: None)
# ELEVENLABS_API_KEY=
## ELEVENLABS_VOICE_ID - Eleven Labs voice ID (Example: None)
# ELEVENLABS_VOICE_ID=
################################################################################
### CHAT MESSAGES
################################################################################
## CHAT_MESSAGES_ENABLED - Enable chat messages (Default: False)
# CHAT_MESSAGES_ENABLED=False
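Before launching, it can help to sanity-check that the .env file actually contains the required key. AutoGPT loads the file itself (via python-dotenv), so this is only a hedged stdlib-only sketch, assuming plain KEY=VALUE lines with optional # comments:

```python
from pathlib import Path

def load_env(path: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env: dict[str, str] = {}
    for raw in Path(path).read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

def check_required(env: dict[str, str], required=("OPENAI_API_KEY",)) -> list[str]:
    """Return the names of required settings that are missing or empty."""
    return [name for name in required if not env.get(name)]
```

For example, `check_required(load_env(".env"))` returns an empty list when OPENAI_API_KEY is set, and `["OPENAI_API_KEY"]` when it is missing or empty.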
File location: docs/configuration/options.md
Content captured on 2023-10-12:
Configuration is controlled through the Config object. You can set configuration variables via the .env file. If you don't have a .env file, create a copy of .env.template in your Auto-GPT folder and name it .env.
AI_SETTINGS_FILE: Location of the AI Settings file relative to the Auto-GPT root directory. Default: ai_settings.yaml
AUDIO_TO_TEXT_PROVIDER: Audio To Text Provider. Only option currently is huggingface. Default: huggingface
AUTHORISE_COMMAND_KEY: Key response accepted when authorising commands. Default: y
AZURE_CONFIG_FILE: Location of the Azure Config file relative to the Auto-GPT root directory. Default: azure.yaml
BROWSE_CHUNK_MAX_LENGTH: When browsing website, define the length of chunks to summarize. Default: 3000
BROWSE_SPACY_LANGUAGE_MODEL: spaCy language model to use when creating chunks. Default: en_core_web_sm
CHAT_MESSAGES_ENABLED: Enable chat messages. Optional
DISABLED_COMMAND_CATEGORIES: Command categories to disable. Command categories are Python module names, e.g. autogpt.commands.execute_code. See the directory autogpt/commands in the source for all command modules. Default: None
ELEVENLABS_API_KEY: ElevenLabs API Key. Optional.
ELEVENLABS_VOICE_ID: ElevenLabs Voice ID. Optional.
EMBEDDING_MODEL: LLM Model to use for embedding tasks. Default: text-embedding-ada-002
EXECUTE_LOCAL_COMMANDS: If shell commands should be executed locally. Default: False
EXIT_KEY: Exit key accepted to exit. Default: n
FAST_LLM: LLM Model to use for most tasks. Default: gpt-3.5-turbo
GITHUB_API_KEY: Github API Key. Optional.
GITHUB_USERNAME: GitHub Username. Optional.
GOOGLE_API_KEY: Google API key. Optional.
GOOGLE_CUSTOM_SEARCH_ENGINE_ID: Google custom search engine ID. Optional.
HEADLESS_BROWSER: Use a headless browser while Auto-GPT uses a web browser. Setting to False will allow you to see Auto-GPT operate the browser. Default: True
HUGGINGFACE_API_TOKEN: HuggingFace API token, to be used for both image generation and audio to text. Optional.
HUGGINGFACE_AUDIO_TO_TEXT_MODEL: HuggingFace audio to text model. Default: CompVis/stable-diffusion-v1-4
HUGGINGFACE_IMAGE_MODEL: HuggingFace model to use for image generation. Default: CompVis/stable-diffusion-v1-4
IMAGE_PROVIDER: Image provider. Options are dalle, huggingface, and sdwebui. Default: dalle
IMAGE_SIZE: Default size of image to generate. Default: 256
MEMORY_BACKEND: Memory back-end to use. Currently json_file is the only supported and enabled backend. Default: json_file
MEMORY_INDEX: Value used in the Memory backend for scoping, naming, or indexing. Default: auto-gpt
OPENAI_API_KEY: REQUIRED - Your OpenAI API Key.
OPENAI_ORGANIZATION: Organization ID in OpenAI. Optional.
PLAIN_OUTPUT: Plain output, which disables the spinner. Default: False
PLUGINS_CONFIG_FILE: Path of the Plugins Config file relative to the Auto-GPT root directory. Default: plugins_config.yaml
PROMPT_SETTINGS_FILE: Location of the Prompt Settings file relative to the Auto-GPT root directory. Default: prompt_settings.yaml
REDIS_HOST: Redis Host. Default: localhost
REDIS_PASSWORD: Redis Password. Optional. Default: ""
REDIS_PORT: Redis Port. Default: 6379
RESTRICT_TO_WORKSPACE: Restrict file reading and writing to the workspace directory. Default: True
SD_WEBUI_AUTH: Stable Diffusion Web UI username:password pair. Optional.
SD_WEBUI_URL: Stable Diffusion Web UI URL. Default: http://localhost:7860
SHELL_ALLOWLIST: List of shell commands that ARE allowed to be executed by Auto-GPT. Only applies if SHELL_COMMAND_CONTROL is set to allowlist. Default: None
SHELL_COMMAND_CONTROL: Whether to use allowlist or denylist to determine what shell commands can be executed. Default: denylist
SHELL_DENYLIST: List of shell commands that ARE NOT allowed to be executed by Auto-GPT. Only applies if SHELL_COMMAND_CONTROL is set to denylist. Default: sudo,su
SMART_LLM: LLM Model to use for "smart" tasks. Default: gpt-4
STREAMELEMENTS_VOICE: StreamElements voice to use. Default: Brian
TEMPERATURE: Value of temperature given to OpenAI. Value from 0 to 2. Lower is more deterministic, higher is more random. See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
TEXT_TO_SPEECH_PROVIDER: Text to Speech Provider. Options are gtts, macos, elevenlabs, and streamelements. Default: gtts
USER_AGENT: User-Agent given when browsing websites. Default: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
USE_AZURE: Use Azure's LLM. Default: False
USE_WEB_BROWSER: Which web browser to use. Options are chrome, firefox, safari, or edge. Default: chrome
WIPE_REDIS_ON_START: Wipes data / index on start. Default: True
Run AutoGPT using only GPT-4:
python -m autogpt --gpt4only
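The SHELL_COMMAND_CONTROL semantics above can be illustrated with a small sketch. This is just an illustration of the documented allowlist/denylist behavior, not AutoGPT's actual implementation:

```python
def command_allowed(command: str, control: str = "denylist",
                    denylist=("sudo", "su"), allowlist=()) -> bool:
    """Return True if the shell command's executable may run.

    Mirrors the documented semantics: with "denylist", everything runs
    except the listed executables; with "allowlist", only listed ones run.
    """
    executable = command.split()[0]
    if control == "allowlist":
        return executable in allowlist
    return executable not in denylist

# With the default denylist (sudo,su):
#   command_allowed("ls -la")        -> True
#   command_allowed("sudo rm -rf /") -> False
```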
Testing with v0.2
Because the goals asked for an analysis of "strengths and weaknesses", the agent needed a long time to find each internet-only bank's website and analyze them one by one, so I manually killed the run after it found the first result (it was taking too long...).
AI Name: FinetechGPT
AI Role: an AI expert in the financial domain that can satisfy all my needs
AI Goals: write an article introducing what internet-only (pure online) banks are
AI Goals: the content must analyze the strengths and weaknesses of Taiwan's current internet-only banks
AI Goals: the article must be clearly structured, well organized, and match reality
AI Goals: terminate and shut down the program once the task is complete
Changed "analyze" to "give examples":
AI Name: Finetech GPT
AI Role: an AI expert in the financial domain that can satisfy all my needs
AI Goals 1: write an article introducing what internet-only banks are
AI Goals 2: the content must give examples of Taiwan's current internet-only banks
AI Goals 3: the article must be clearly structured, well organized, and match reality
AI Goals 4: terminate and shut down the program once the task is complete
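The name, role, and goals above correspond to AutoGPT's ai_settings.yaml file (the AI_SETTINGS_FILE setting). From memory, the 0.2/0.3-era format looked roughly like this; verify against your version's docs before relying on it:

```yaml
ai_name: Finetech GPT
ai_role: an AI expert in the financial domain that can satisfy all my needs
ai_goals:
- Write an article introducing what internet-only banks are
- Give examples of Taiwan's current internet-only banks
- Keep the article clearly structured and factually accurate
- Terminate the program once the task is complete
```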
Result: the search results contained errors.
KGI (internet) Bank is not an internet-only bank; perhaps the confusion came from the English rendering "pure online bank"?
WeBank is a private internet-only bank led by China's Tencent Holdings, not a Taiwanese one.
Google searches for related information.
It consolidates the information, then writes the article:
"I need to make sure I'm not just copying information from the internet, but synthesizing it into a coherent article."
...(omitted)
SYSTEM: Command write_to_file returned: File written to successfully.
THOUGHTS: Now that we have written the article, we need to make sure that it is well-written and informative. I suggest we use the 'read_file' command to read the article and make any necessary edits.
REASONING: We need to make sure that the article is well-written and informative before we publish it. The 'read_file' command is the best way to do this, as it allows us to read the article and make any necessary edits.
PLAN:
- Use the 'read_file' command to read the article and make any necessary edits.
- Make sure that the article is well-written and informative.
- Once the article is complete, we can use the 'append_to_file' command to add it to an existing file or the 'execute_python_file' command to run a Python script that will publish the article.
The article is written, then checked and edited.
THOUGHTS: Now that we have read the article and made any necessary edits, we need to decide how we want to publish it. I suggest we use the 'send_tweet' command to publish the article on Twitter.
REASONING: Twitter is a popular social media platform that is widely used for sharing news and information. By using the 'send_tweet' command, we can quickly and easily publish the article on Twitter and reach a wide audience.
PLAN:
- Use the 'send_tweet' command to publish the article on Twitter.
- Make sure that the tweet includes a link to the article and any relevant hashtags.
- Once the tweet is published, we can use the 'task_complete' command to end the program.
OK, the article is finished and edited, and the agent suggests publishing it to Twitter.
No Twitter API is configured,
so it starts searching for related information.
(manually interrupted)
What are Pure Online Banks?
Pure online banks are financial institutions that operate entirely online, without any physical branches. These banks offer a range of financial products and services, including checking and savings accounts, loans, and credit cards. Because they don't have the overhead costs associated with physical branches, pure online banks are often able to offer higher interest rates and lower fees than traditional banks.
Examples of Pure Online Banks in Taiwan:
KGI Bank is a pure online bank that was launched in Taiwan in 2019. The bank offers a range of financial products and services, including checking and savings accounts, loans, and credit cards. Line Bank is another pure online bank that was launched in Taiwan in 2020. The bank is a subsidiary of Line Corporation, the company behind the popular messaging app Line. Rakuten Bank and WeBank are two other pure online banks that operate in Taiwan.
In conclusion, pure online banks are a growing trend in the financial industry. They offer a range of financial products and services, often at lower costs than traditional banks. If you're looking for a bank that offers convenience, low fees, and high interest rates, a pure online bank may be the right choice for you.
Fact-checking the examples in the generated article:
KGI Bank — wrong ❌ (not an internet-only bank)
Line Bank ✓
Rakuten Bank ✓
WeBank — wrong ❌ (operates in China, not Taiwan)
As an aside, you can also build a Fintech ChatGPT with OpenAI GPTs: