English | 中文 | 繁體中文 | Français | 日本語 | Português (Brasil)
A 100% local alternative to Manus AI, this voice-enabled AI assistant autonomously browses the web, writes code, and plans tasks while keeping all data on your device. Tailored for local reasoning models, it runs entirely on your hardware, ensuring complete privacy and zero cloud dependency.
🔒 Fully Local & Private - Everything runs on your machine — no cloud, no data sharing. Your files, conversations, and searches stay private.
🌐 Smart Web Browsing - AgenticSeek can browse the internet by itself — search, read, extract info, fill web forms — all hands-free.
💻 Autonomous Coding Assistant - Need code? It can write, debug, and run programs in Python, C, Go, Java, and more — all without supervision.
🧠 Smart Agent Selection - You ask, it figures out the best agent for the job automatically. Like having a team of experts ready to help.
📋 Plans & Executes Complex Tasks - From trip planning to complex projects — it can split big tasks into steps and get things done using multiple AI agents.
🎙️ Voice-Enabled - Clean, fast, futuristic voice and speech-to-text, letting you talk to it like it's your personal AI from a sci-fi movie.
Can you search for the agenticSeek project, learn what skills are required, then open CV_candidates.zip and tell me which candidates best match the project
agentic_seek_demo.mov
Disclaimer: This demo, including all the files that appear (e.g., CV_candidates.zip), is entirely fictional. We are not a corporation; we seek open-source contributors, not candidates.
🛠️ ⚠️ Active Work in Progress – Please note that Code/Bash is not dockerized yet but will be soon (see the docker_deployement branch). Do not deploy over a network or in production.
🙏 This project started as a side project with zero roadmap and zero funding. It has grown far beyond what I expected, even landing in GitHub Trending. Contributions, feedback, and patience are deeply appreciated.
Make sure you have ChromeDriver, Docker, and Python 3.10 installed.
We strongly advise you to use exactly Python 3.10 for the setup; dependency errors may occur otherwise.
For issues related to ChromeDriver, see the Chromedriver section.
git clone https://github.com/Fosowl/agenticSeek.git
cd agenticSeek
mv .env.example .env
python3 -m venv agentic_seek_env
source agentic_seek_env/bin/activate
# On Windows: agentic_seek_env\Scripts\activate
Ensure Python, Docker with docker compose, and Google Chrome are installed.
We recommend Python 3.10.0.
Automatic Installation (Recommended):
For Linux/macOS:
./install.sh
For Windows:
./install.bat
Manually:
Note: For any OS, ensure the ChromeDriver you install matches your installed Chrome version. Run google-chrome --version to check. See known issues if you have Chrome >135.
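To double-check the match, compare the major version printed by google-chrome --version against that of chromedriver --version. A tiny helper to illustrate the comparison (hypothetical, not part of the repo; the version strings below are examples):

```python
def major_version(version_line: str) -> int:
    """Extract the major version from output like 'Google Chrome 134.0.6998.89'."""
    return int(version_line.strip().split()[-1].split(".")[0])

# Example outputs of `google-chrome --version` and `chromedriver --version`:
chrome = "Google Chrome 134.0.6998.89"
driver = "ChromeDriver 134.0.6998.88"
print(major_version(chrome) == major_version(driver))  # majors must match (both 134)
```

Only the major component (134 here) needs to agree; the trailing build numbers may differ.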
- Linux:
Update package list: sudo apt update
Install Dependencies: sudo apt install -y alsa-utils portaudio19-dev python3-pyaudio libgtk-3-dev libnotify-dev libgconf-2-4 libnss3 libxss1
Install a ChromeDriver matching your Chrome browser version: sudo apt install -y chromium-chromedriver
Install requirements: pip3 install -r requirements.txt
- macOS:
Update brew: brew update
Install ChromeDriver: brew install --cask chromedriver
Install portaudio: brew install portaudio
Upgrade pip: python3 -m pip install --upgrade pip
Upgrade setuptools and wheel: pip3 install --upgrade setuptools wheel
Install requirements: pip3 install -r requirements.txt
- Windows:
Install pyreadline3: pip install pyreadline3
Install portaudio manually (e.g., via vcpkg or prebuilt binaries) and then run: pip install pyaudio
Download and install ChromeDriver manually from: https://sites.google.com/chromium.org/driver/getting-started
Place chromedriver in a directory included in your PATH.
Install requirements: pip3 install -r requirements.txt
Hardware Requirements:
To run LLMs locally, you'll need sufficient hardware. At a minimum, a GPU capable of running Qwen/Deepseek 14B is required. See the FAQ for detailed model/performance recommendations.
Setup your local provider
Start your local provider, for example with ollama:
ollama serve
See below for a list of supported local providers.
Update the config.ini
Change the config.ini file to set provider_name to a supported provider and provider_model to an LLM supported by your provider. We recommend a reasoning model such as Qwen or Deepseek.
See the FAQ at the end of the README for required hardware.
[MAIN]
is_local = True # Whether you are running locally or with a remote provider.
provider_name = ollama # or lm-studio, openai, etc.
provider_model = deepseek-r1:14b # choose a model that fits your hardware
provider_server_address = 127.0.0.1:11434
agent_name = Jarvis # name of your AI
recover_last_session = True # whether to recover the previous session
save_session = True # whether to remember the current session
speak = True # text to speech
listen = False # speech to text, only for CLI
work_dir = /Users/mlg/Documents/workspace # The workspace for AgenticSeek.
jarvis_personality = False # Whether to use a more "Jarvis"-like personality (experimental)
languages = en zh # The list of languages; text to speech will default to the first language in the list
[BROWSER]
headless_browser = True # Whether to use a headless browser; recommended only if you use the web interface.
stealth_mode = True # Use undetected selenium to reduce browser detection
Warning: Do NOT set provider_name to openai if using LM-studio for running LLMs. Set it to lm-studio instead.
Note: Some providers (e.g., lm-studio) require http:// in front of the IP address. For example: http://127.0.0.1:1234
List of local providers

Provider | Local? | Description
---|---|---
ollama | Yes | Run LLMs locally with ease using ollama as an LLM provider
lm-studio | Yes | Run LLMs locally with LM Studio (set provider_name to lm-studio)
openai | Yes | Use an openai-compatible API (e.g., a llama.cpp server)
Next step: Start services and run AgenticSeek
See the Known issues section if you are having issues
See the Run with an API section if your hardware can't run Deepseek locally
See the Config section for a detailed config file explanation.
Set the desired provider in the config.ini. See below for a list of API providers.
[MAIN]
is_local = False
provider_name = google
provider_model = gemini-2.0-flash
provider_server_address = 127.0.0.1:5000 # doesn't matter
Warning: Make sure there is no trailing space in the config.
Export your API key: export <<PROVIDER>>_API_KEY="xxx"
Example: export TOGETHER_API_KEY="xxxxx"
List of API providers

Provider | Local? | Description
---|---|---
openai | Depends | Use ChatGPT API
deepseek | No | Deepseek API (non-private)
huggingface | No | Hugging-Face API (non-private)
togetherAI | No | Use together AI API (non-private)
google | No | Use google gemini API (non-private)
We advise against using gpt-4o or other closedAI models; their performance is poor for web browsing and task planning.
Please also note that coding/bash might fail with gemini; it seems to ignore our prompts about the format to respect, which are optimized for deepseek r1.
Next step: Start services and run AgenticSeek
See the Known issues section if you are having issues
See the Config section for a detailed config file explanation.
Activate your python env if needed.
source agentic_seek_env/bin/activate
Start required services. This will start all services from the docker-compose.yml, including:
- searxng
- redis (required by searxng)
- frontend
sudo ./start_services.sh # MacOS
<br/>start ./start_services.cmd # Windows
Option 1: Run with the CLI interface.
python3 cli.py
We advise you to set headless_browser to False in the config.ini for CLI mode.
Option 2: Run with the Web interface.
Start the backend.
python3 api.py
Go to http://localhost:3000/ and you should see the web interface.
Make sure the services are up and running with ./start_services.sh, then run AgenticSeek with python3 cli.py for CLI mode, or with python3 api.py and go to localhost:3000 for the web interface.
You can also use speech-to-text by setting listen = True in the config. Only for CLI mode.
To exit, simply say/type goodbye.
Here are some example usages:
Make a snake game in python!
Search the web for top cafes in Rennes, France, and save a list of three with their addresses in rennes_cafes.txt.
Write a Go program to calculate the factorial of a number, save it as factorial.go in your workspace
Search my summer_pictures folder for all JPG files, rename them with today’s date, and save a list of renamed files in photos_list.txt
Search online for popular sci-fi movies from 2024 and pick three to watch tonight. Save the list in movie_night.txt.
Search the web for the latest AI news articles from 2025, select three, and write a Python script to scrape their titles and summaries. Save the script as news_scraper.py and the summaries in ai_news.txt in /home/projects
Friday, search the web for a free stock price API, register with supersuper7434567@gmail.com then write a Python script to fetch using the API daily prices for Tesla, and save the results in stock_prices.csv
Note that form-filling capabilities are still experimental and might fail.
After you type your query, AgenticSeek will allocate the best agent for the task.
Because this is an early prototype, the agent routing system might not always allocate the right agent based on your query.
Therefore, you should be very explicit about what you want and how the AI should proceed. For example, if you want it to conduct a web search, do not say:
Do you know some good countries for solo-travel?
Instead, ask:
Do a web search and find out which are the best countries for solo travel
If you have a powerful computer or a server you can use, but you want to access it from your laptop, you have the option to run the LLM on a remote server using our custom LLM server.
On the "server" that will run the AI model, get its IP address:
ip a | grep "inet " | grep -v 127.0.0.1 | awk '{print $2}' | cut -d/ -f1 # local ip
curl https://ipinfo.io/ip # public ip
Note: For Windows or macOS, use ipconfig or ifconfig respectively to find the IP address.
Clone the repository and enter the llm_server/ folder.
git clone --depth 1 https://github.com/Fosowl/agenticSeek.git
cd agenticSeek/llm_server/
Install server-specific requirements:
pip3 install -r requirements.txt
Run the server script.
python3 app.py --provider ollama --port 3333
You have the choice between using ollama and llamacpp as the LLM service.
Now on your personal computer:
Change the config.ini file to set provider_name to server and provider_model to deepseek-r1:xxb.
Set the provider_server_address to the IP address of the machine that will run the model.
[MAIN]
is_local = False
provider_name = server
provider_model = deepseek-r1:70b
provider_server_address = x.x.x.x:3333
Next step: Start services and run AgenticSeek
Please note that currently speech-to-text only works in English.
The speech-to-text functionality is disabled by default. To enable it, set the listen option to True in the config.ini file:
listen = True
When enabled, the speech-to-text feature listens for a trigger keyword, which is the agent's name, before it begins processing your input. You can customize the agent's name by updating the agent_name value in the config.ini file:
agent_name = Friday
For optimal recognition, we recommend using a common English name like "John" or "Emma" as the agent name.
Once you see the transcript start to appear, say the agent's name aloud to wake it up (e.g., "Friday").
Speak your query clearly.
End your request with a confirmation phrase to signal the system to proceed. Examples of confirmation phrases include:
"do it", "go ahead", "execute", "run", "start", "thanks", "would ya", "please", "okay?", "proceed", "continue", "go on", "do that", "go it", "do you understand?"
Example config:
[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:32b
provider_server_address = 127.0.0.1:11434
agent_name = Friday
recover_last_session = False
save_session = False
speak = False
listen = False
work_dir = /Users/mlg/Documents/ai_folder
jarvis_personality = False
languages = en zh
[BROWSER]
headless_browser = False
stealth_mode = False
Explanation:
- is_local -> Runs the agent locally (True) or on a remote server (False).
- provider_name -> The provider to use (one of: ollama, server, lm-studio, deepseek-api).
- provider_model -> The model used, e.g., deepseek-r1:32b.
- provider_server_address -> Server address, e.g., 127.0.0.1:11434 for local. Set to anything for a non-local API.
- agent_name -> Name of the agent, e.g., Friday. Used as the trigger word for speech-to-text.
- recover_last_session -> Restarts from the last session (True) or not (False).
- save_session -> Saves session data (True) or not (False).
- speak -> Enables voice output (True) or not (False).
- listen -> Listens for voice input (True) or not (False).
- work_dir -> Folder the AI will have access to, e.g., /Users/user/Documents/.
- jarvis_personality -> Uses a JARVIS-like personality (True) or not (False). This simply changes the prompt file.
- headless_browser -> Runs the browser without a visible window (True) or not (False).
- stealth_mode -> Makes bot detection harder. The only downside is that you have to manually install the anticaptcha extension.
- languages -> List of supported languages, required for the agent routing system to work properly. Avoid listing too many or very similar languages; the longer the list, the more models will be downloaded.
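Since config.ini is a standard INI file, you can sanity-check your edits with Python's built-in configparser (a generic sketch for illustration, not how AgenticSeek itself loads the file):

```python
import configparser

# A minimal sample mirroring the fields explained above.
sample = """
[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:32b
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# getboolean understands True/False, yes/no, on/off, 1/0
print(cfg.getboolean("MAIN", "is_local"))  # True
print(cfg.get("MAIN", "provider_model"))   # deepseek-r1:32b
```

A value with a typo in the section or key name raises configparser.NoSectionError or NoOptionError, which is a quick way to spot mistakes before starting the app.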
The table below shows the available providers:
Provider | Local? | Description
---|---|---
ollama | Yes | Run LLMs locally with ease using ollama as an LLM provider
server | Yes | Host the model on another machine, run it from your local machine
lm-studio | Yes | Run LLMs locally with LM Studio (lm-studio)
openai | Depends | Use ChatGPT API (non-private) or an openai-compatible API
deepseek-api | No | Deepseek API (non-private)
huggingface | No | Hugging-Face API (non-private)
togetherAI | No | Use together AI API (non-private)
google | No | Use google gemini API (non-private)
To select a provider, change the config.ini:
is_local = True
provider_name = ollama
provider_model = deepseek-r1:32b
provider_server_address = 127.0.0.1:5000
is_local: should be True for any locally running LLM, otherwise False.
provider_name: select the provider to use by its name; see the provider list above.
provider_model: set the model for the agent to use.
provider_server_address: can be set to anything if you are not using the server provider.
Known error #1: ChromeDriver mismatch
Exception: Failed to initialize browser: Message: session not created: This version of ChromeDriver only supports Chrome version 113 Current browser version is 134.0.6998.89 with binary path
This happens if there is a mismatch between your browser and ChromeDriver versions.
You need to download the latest version:
https://developer.chrome.com/docs/chromedriver/downloads
If you're using Chrome version 115 or newer, go to:
https://googlechromelabs.github.io/chrome-for-testing/
And download the ChromeDriver version matching your OS.
If this section is incomplete, please raise an issue.
Exception: Provider lm-studio failed: HTTP request failed: No connection adapters were found for '127.0.0.1:11434/v1/chat/completions'
Make sure you have http:// in front of the provider IP address:
provider_server_address = http://127.0.0.1:11434
raise ValueError("SearxNG base URL must be provided either as an argument or via the SEARXNG_BASE_URL environment variable.")
ValueError: SearxNG base URL must be provided either as an argument or via the SEARXNG_BASE_URL environment variable.
Maybe you didn't rename .env.example to .env? You can also export SEARXNG_BASE_URL:
export SEARXNG_BASE_URL="http://127.0.0.1:8080"
Q: What hardware do I need?
Model Size | GPU | Comment
---|---|---
7B | 8GB VRAM | ⚠️ Not recommended. Performance is poor, hallucinations are frequent, and the planner agent will likely fail.
14B | 12GB VRAM (e.g., RTX 3060) | ✅ Usable for simple tasks. May struggle with web browsing and planning tasks.
32B | 24+ GB VRAM (e.g., RTX 4090) | 🚀 Succeeds with most tasks, might still struggle with task planning.
70B+ | 48+ GB VRAM (e.g., Mac Studio) | 💪 Excellent. Recommended for advanced use cases.
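As a very rough rule of thumb (an approximation, not a benchmark), the VRAM a model needs is the parameter count times the bytes per quantized weight, plus some overhead for the KV cache and activations:

```python
def approx_vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Very rough VRAM estimate: weights only, plus ~20% for KV cache/activations."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead

# A 14B model at 4-bit quantization needs roughly 8-9 GB:
print(round(approx_vram_gb(14, 4), 1))
# A 32B model at 4-bit needs roughly 19-20 GB, hence the 24+ GB row above:
print(round(approx_vram_gb(32, 4), 1))
```

Actual usage varies with context length, quantization format, and runtime, so treat this as a lower bound when choosing a card.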
Q: Why Deepseek R1 over other models?
Deepseek R1 excels at reasoning and tool use for its size. We think it's a solid fit for our needs; other models work fine, but Deepseek is our primary pick.
Q: I get an error running cli.py. What do I do?
Ensure your local provider is running (ollama serve), your config.ini matches your provider, and dependencies are installed. If none of this works, feel free to raise an issue.
Q: Can it really run 100% locally?
Yes. With the Ollama, lm-studio, or server providers, all speech-to-text, LLM, and text-to-speech models run locally. Non-local options (OpenAI or other APIs) are optional.
Q: Why should I use AgenticSeek when I have Manus?
This started as a side project we built out of interest in AI agents. What's special about it is that we want to use local models and avoid APIs.
We draw inspiration from Jarvis and Friday (the Iron Man movies) to make it "cool", but for functionality we take more inspiration from Manus, because that's what people want in the first place: a local Manus alternative.
Unlike Manus, AgenticSeek prioritizes independence from external systems, giving you more control and privacy while avoiding API costs.
We're looking for developers to improve AgenticSeek! Check out the open issues or discussions.
Fosowl | Paris Time
antoineVIVIES | Taipei Time
steveh8758 | Taipei Time