
AnythingLLM logo


AnythingLLM: The all-in-one AI app you were looking for.

Chat with your docs, use AI Agents, hyper-configurable, multi-user, & no frustrating set up required.

Discord | License | Docs | Hosted Instance

English · 简体中文 · 日本語

👉 AnythingLLM for desktop (Mac, Windows, & Linux)! Download Now

A full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as a reference during chatting.

This application allows you to pick and choose which LLM or Vector Database you want to use, and supports multi-user management and permissions.

Watch the demo!

Product Overview

AnythingLLM is a full-stack application where you can use commercial off-the-shelf LLMs or popular open-source LLMs and vectorDB solutions to build a private ChatGPT with no compromises. You can run it locally or host it remotely, and chat intelligently with any documents you provide it.

AnythingLLM divides your documents into objects called workspaces. A Workspace functions a lot like a thread, but with the addition of containerization of your documents. Workspaces can share documents, but they do not talk to each other so you can keep your context for each workspace clean.

Some cool features of AnythingLLM:

  • Multi-user instance support and permissioning
  • Agents inside your workspace (browse the web, run code, etc.)
  • Custom embeddable chat widget for your website
  • Multiple document type support (PDF, TXT, DOCX, etc.)
  • Manage documents in your vector database from a simple UI
  • Two chat modes: conversation and query. Conversation retains previous questions and amendments; query is simple Q&A against your documents.
  • In-chat citations
  • 100% cloud deployment ready
  • "Bring your own LLM" model
  • Extremely efficient cost-saving measures for managing very large documents. You'll never pay to embed a massive document or transcript more than once: 90% more cost effective than other document chatbot solutions.
  • Full Developer API for custom integrations!

Supported LLMs, Embedder Models, Speech models, and Vector Databases

Large Language Models (LLMs):

Embedder models:

Audio Transcription models:

TTS (text-to-speech) support:

STT (speech-to-text) support:

  • Native Browser Built-in (default)

Vector Databases:

Technical Overview

This monorepo consists of five main sections:

  • frontend: A viteJS + React frontend that you can run to easily create and manage all the content the LLM can use.
  • server: A NodeJS express server to handle all the interactions and do all the vectorDB management and LLM interactions.
  • collector: A NodeJS express server that processes and parses documents from the UI.
  • docker: Docker instructions and build process + information for building from source.
  • embed: Code specifically for generation of the embed widget.

🛳 Self Hosting

Mintplex Labs & the community maintain a number of deployment methods, scripts, and templates that you can use to run AnythingLLM locally. Refer to the table below to read how to deploy on your preferred environment or to automatically deploy.

Docker | AWS | GCP | Digital Ocean | Render.com
Deploy on Docker | Deploy on AWS | Deploy on GCP | Deploy on DigitalOcean | Deploy on Render.com

Railway | RepoCloud
Deploy on Railway | Deploy on RepoCloud

or set up a production AnythingLLM instance without Docker →

How to set up for development

  • yarn setup To fill in the required .env files you'll need in each of the application sections (from root of repo).
    • Go fill those out before proceeding. Ensure server/.env.development is filled or else things won't work right.
  • yarn dev:server To boot the server locally (from root of repo).
  • yarn dev:frontend To boot the frontend locally (from root of repo).
  • yarn dev:collector To then run the document collector (from root of repo).
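The steps above can be sketched as one shell session. This is only a restatement of the listed commands, run from the root of the repo; each dev command is long-running, so in practice the last three go in separate terminals:

```shell
# Development setup for AnythingLLM (run from the root of the repo).
yarn setup           # generates the required .env files in each app section
# Fill in server/.env.development before continuing, or things won't work right.
yarn dev:server      # boot the server locally
yarn dev:frontend    # boot the frontend locally (use a separate terminal)
yarn dev:collector   # run the document collector (use a separate terminal)
```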

Learn about documents

Learn about vector caching

Contributing

  • create issue
  • create PR with branch name format of <issue number>-<short name>
  • yee haw let's merge
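As a purely illustrative sketch (this one-liner is hypothetical and not part of the repo's tooling), the <issue number>-<short name> branch format can be checked before pushing:

```shell
# Hypothetical check that a branch name matches <issue number>-<short name>.
branch="123-fix-login-bug"   # example value; substitute your branch name
echo "$branch" | grep -Eq '^[0-9]+-[A-Za-z0-9-]+$' && echo "ok" || echo "bad branch name"
```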

Telemetry & Privacy

AnythingLLM by Mintplex Labs Inc contains a telemetry feature that collects anonymous usage information.

More about Telemetry & Privacy for AnythingLLM

Why?

We use this information to help us understand how AnythingLLM is used, to help us prioritize work on new features and bug fixes, and to help us improve AnythingLLM's performance and stability.

Opting out

Set DISABLE_TELEMETRY in your server or docker .env settings to "true" to opt out of telemetry. You can also do this in-app by going to the sidebar > Privacy and disabling telemetry.
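Concretely, the opt-out is a single variable in the relevant .env file (shown here for the server; per the text above, the same key works in the docker .env settings):

```shell
# server/.env (or docker .env): opt out of anonymous telemetry
DISABLE_TELEMETRY="true"
```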

What do you explicitly track?

We will only track usage details that help us make product and roadmap decisions, specifically:

  • Type of your installation (Docker or Desktop)
  • When a document is added or removed. No information about the document. Just that the event occurred. This gives us an idea of use.
  • Type of vector database in use. Lets us know which vector database provider is the most used, so we can prioritize changes when updates arrive for that provider.
  • Type of LLM in use. Lets us know the most popular choice, so we can prioritize changes when updates arrive for that provider.
  • Chat is sent. This is the most regular "event" and gives us an idea of the daily-activity of this project across all installations. Again, only the event is sent - we have no information on the nature or content of the chat itself.

You can verify these claims by finding all locations where Telemetry.sendTelemetry is called. Additionally, these events are written to the output log, so you can also see the specific data which was sent - if enabled. No IP or other identifying information is collected. The telemetry provider is PostHog - an open-source telemetry collection service.
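For example, an illustrative way to audit this yourself (assuming a local checkout of the repo) is to search the server source for the call sites named above:

```shell
# List every call site of Telemetry.sendTelemetry in the server code,
# with file names and line numbers.
grep -rn "Telemetry.sendTelemetry" server/
```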

View all telemetry events in source code

🔗 More Products

  • VectorAdmin: An all-in-one GUI & tool-suite for managing vector databases.
  • OpenAI Assistant Swarm: Turn your entire library of OpenAI assistants into one single army commanded from a single agent.


Copyright © 2024 Mintplex Labs.

This project is MIT licensed.