An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) that appear to have been custom-made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.
Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.
Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
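For readers wondering how a scanner can pick one credential out of millions of public commits, the usual approach pairs vendor-specific key patterns with a randomness check. Here is a minimal sketch in Python; the "xai-" prefix, token length, and entropy threshold are illustrative assumptions, not the real key format or GitGuardian's actual method:

import math
import re

# Illustrative pattern: vendor keys usually pair a fixed prefix with a long
# random token. The "xai-" prefix and length are assumptions for this sketch.
KEY_PATTERN = re.compile(r"xai-[A-Za-z0-9]{32,}")

def shannon_entropy(s: str) -> float:
    """Bits per character; random tokens score far higher than ordinary prose."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def scan(text: str):
    """Yield key-shaped substrings that also look random enough to be real."""
    for match in KEY_PATTERN.finditer(text):
        if shannon_entropy(match.group()) > 3.5:  # crude false-positive filter
            yield match.group()

# A fabricated example of the kind of line that ends up in a public commit.
committed_code = 'api_key = "xai-9hJ4kQ2mXw7Lp0RzTbV5nYcD8eFgA1sU"'
for hit in scan(committed_code):
    print("possible leaked credential:", hit)

Production scanners run thousands of such patterns against every pushed commit, which is how alerts like the one described below can be generated automatically.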
GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.
“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”
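To make concrete what accessing the API "with the identity of the user" means, here is a rough sketch of how a key holder could enumerate the models the key reaches, assuming the x.ai API follows the common OpenAI-style convention of bearer-token auth and a model-listing endpoint (the base URL and paths are assumptions, not details from GitGuardian's report):

import os
import requests

# Assumed OpenAI-style endpoint layout; the exact paths are an assumption.
BASE_URL = "https://api.x.ai/v1"
API_KEY = os.environ["XAI_API_KEY"]  # a leaked key would slot in identically

resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()

# Enumerating models is how a researcher could tell that one key reaches
# dozens of private, fine-tuned LLMs without ever sending a prompt.
for model in resp.json().get("data", []):
    print(model.get("id"))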
Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.
“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”
xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.
Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.
“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”
The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.
The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.
“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.
Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.
A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.
Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.
“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”
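The weak key management Caturegli describes has a well-known countermeasure: scan changes for key-shaped strings before they are ever committed. Below is a minimal pre-commit hook sketch in Python, reusing the illustrative pattern from earlier; real tools such as gitleaks or GitGuardian's own ggshield do this far more thoroughly. Save it as .git/hooks/pre-commit and make it executable:

#!/usr/bin/env python3
"""Reject commits whose staged changes contain key-shaped strings."""
import re
import subprocess
import sys

# Same illustrative pattern as above; real scanners ship hundreds of these.
KEY_PATTERN = re.compile(r"xai-[A-Za-z0-9]{32,}")

# Only inspect lines being added in this commit.
diff = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

leaks = [
    line for line in diff.splitlines()
    if line.startswith("+") and KEY_PATTERN.search(line)
]

if leaks:
    print("Commit blocked: possible API key in staged changes:", file=sys.stderr)
    for line in leaks:
        print(" ", line[:80], file=sys.stderr)
    sys.exit(1)  # non-zero exit aborts the commit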
Could someone explain to me how something like this would occur? What would cause someone on the team to publish a private API key? Is an API key even needed to push to GitHub in this case? I’m wondering how one would end up in the files other than on purpose. I don’t know much about these subjects, so maybe someone with more insight could offer an explanation. Thanks for any help.
Typically this occurs through an error on the user’s part, where an API key is copied into a repository while writing scripts or code that make calls to the API. It was very likely not deliberate, but it speaks to a lack of experience or knowledge around how to handle sensitive information. In other words, a rookie mistake.
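To make that concrete, the mistake and the fix usually look something like this (the names and key value are illustrative placeholders):

# The rookie mistake: a literal key pasted into a script, which then gets
# committed and pushed along with everything else.
API_KEY = "xai-EXAMPLEEXAMPLEEXAMPLEEXAMPLE0000"  # placeholder, not a real key

# The safer pattern: read the key from the environment (or a .env file
# listed in .gitignore) so it never becomes part of the repository.
import os
API_KEY = os.environ.get("XAI_API_KEY")
if API_KEY is None:
    raise RuntimeError("Set XAI_API_KEY in your environment, not in the code.")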