See also:
“What Is ChatGPT Doing … and Why Does It Work?” »

ChatGPT Gets Its “Wolfram Superpowers”!

This is part of an ongoing series about our LLM-related technology:
ChatGPT Gets Its “Wolfram Superpowers”!
Instant Plugins for ChatGPT: Introducing the Wolfram ChatGPT Plugin Kit
The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language
Prompts for Work & Play: Launching the Wolfram Prompt Repository
Introducing Chat Notebooks: Integrating LLMs into the Notebook Paradigm

ChatGPT Gets Its “Wolfram Superpowers”!

To enable the functionality described here, select and install the Wolfram plugin from within ChatGPT.

Note that this capability is so far available only to some ChatGPT Plus users; for more information, see OpenAI’s announcement.

In Just Two and a Half Months…

Early in January I wrote about the possibility of connecting ChatGPT to Wolfram|Alpha. And today—just two and a half months later—I’m excited to announce that it’s happened! Thanks to some heroic software engineering by our team and by OpenAI, ChatGPT can now call on Wolfram|Alpha—and Wolfram Language as well—to give it what we might think of as “computational superpowers”. It’s still very early days for all of this, but it’s already very impressive—and one can begin to see how amazingly powerful (and perhaps even revolutionary) what we can call “ChatGPT + Wolfram” can be.

Back in January, I made the point that, as an LLM neural net, ChatGPT—for all its remarkable prowess in textually generating material “like” what it’s read from the web, etc.—can’t itself be expected to do actual nontrivial computations, or to systematically produce correct (rather than just “looks roughly right”) data, etc. But when it’s connected to the Wolfram plugin it can do these things. So here’s my (very simple) first example from January, but now done by ChatGPT with “Wolfram superpowers” installed:

How far is it from Tokyo to Chicago?

It’s a correct result (which in January it wasn’t)—found by actual computation. And here’s a bonus: immediate visualization:

Show the path

How did this work? Under the hood, ChatGPT is formulating a query for Wolfram|Alpha—then sending it to Wolfram|Alpha for computation, and then “deciding what to say” based on reading the results it got back. You can see this back and forth by clicking the “Used Wolfram” box (and by looking at this you can check that ChatGPT didn’t “make anything up”):

Used Wolfram

There are lots of nontrivial things going on here, on both the ChatGPT and Wolfram|Alpha sides. But the upshot is a good, correct result, knitted into a nice, flowing piece of text.
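The round trip just described can be sketched in a few lines of code. This is a deliberately minimal illustration, not the actual plugin protocol: the stubbed lookup table, function names, and the exact phrasing steps are all assumptions made for the sketch, with a local function standing in for the real network call to Wolfram|Alpha.

```python
# Minimal sketch of the ChatGPT <-> Wolfram plugin round trip:
# (1) the model formulates a query, (2) the plugin computes,
# (3) the model "decides what to say" from the result it read back.
# All names and the lookup table here are illustrative stand-ins.

def stub_wolfram_alpha(query: str) -> str:
    # Stand-in for the real Wolfram|Alpha computation (normally a network call).
    known = {"distance from Tokyo to Chicago": "about 6300 miles"}
    return known.get(query, "Wolfram|Alpha did not understand your input")

def answer(question: str) -> str:
    # Step 1: the model reformulates the question as a plugin query (stubbed here).
    tool_query = "distance from Tokyo to Chicago"
    # Step 2: the query is sent off and a text result comes back.
    result = stub_wolfram_alpha(tool_query)
    # Step 3: the model weaves the returned text into its reply.
    return f"The distance from Tokyo to Chicago is {result}."

print(answer("How far is it from Tokyo to Chicago?"))
```

The essential point is visible even in this toy version: the model never computes the distance itself; it only reads and rephrases what the computational side returned.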

Let’s try another example, also from what I wrote in January:

What is the integral?

A fine result, worthy of our technology. And again, we can get a bonus:

Plot that

In January, I noted that ChatGPT ended up just “making up” plausible (but wrong) data when given this prompt:

Tell me about livestock populations

But now it calls the Wolfram plugin and gets a good, authoritative answer. And, as a bonus, we can also make a visualization:

Make a bar chart

Another example from back in January that now comes out correctly is:

What planetary moons are larger than Mercury?

If you actually try these examples, don’t be surprised if they work differently (sometimes better, sometimes worse) from what I’m showing here. Since ChatGPT uses randomness in generating its responses, different things can happen even when you ask it the exact same question (even in a fresh session). It feels “very human”. But different from the solid “right-answer-and-it-doesn’t-change-if-you-ask-it-again” experience that one gets in Wolfram|Alpha and Wolfram Language.
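Why the same question can yield different answers comes down to sampling: an LLM picks its next token from a probability distribution rather than always taking the single most likely choice. Here is a schematic of temperature-based sampling; the numbers and tokens are made up for illustration, and this is not ChatGPT's actual decoding code.

```python
# Toy illustration of sampled (non-deterministic) token generation.
# Higher temperature flattens the distribution and increases variability.
import math
import random

def sample_next_token(logits, temperature, rng):
    # Softmax over temperature-scaled logits, then sample from the result.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    r = rng.random()
    cum = 0.0
    for tok, e in exps.items():
        cum += e / total
        if r < cum:
            return tok
    return tok  # fallback for floating-point edge cases

rng = random.Random(0)
logits = {"Chicago": 2.0, "Tokyo": 1.5, "Paris": 0.5}
samples = [sample_next_token(logits, temperature=1.0, rng=rng) for _ in range(20)]
print(samples)  # a mix of tokens, not always the top-scoring one
```

Run the same "query" twice with different random draws and you get different continuations, which is exactly the "very human" variability described above.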

Here’s an example where we saw ChatGPT (rather impressively) “having a conversation” with the Wolfram plugin, after at first finding out that it got the “wrong Mercury”:

How big is Mercury?

One particularly significant thing here is that ChatGPT isn’t just using us to do a “dead-end” operation like show the content of a webpage. Rather, we’re acting much more like a true “brain implant” for ChatGPT—where it asks us things whenever it needs to, and we give responses that it can weave back into whatever it’s doing. It’s rather impressive to see in action. And—although there’s definitely much more polishing to be done—what’s already there goes a long way towards (among other things) giving ChatGPT the ability to deliver accurate, curated knowledge and data—as well as correct, nontrivial computations.

But there’s more too. We already saw examples where we were able to provide custom-created visualizations to ChatGPT. And with our computation capabilities we’re routinely able to make “truly original” content—computations that have simply never been done before. And there’s something else: while “pure ChatGPT” is restricted to things it “learned during its training”, by calling us it can get up-to-the-moment data.

This can be based on our real-time data feeds (here we’re getting called twice; once for each place):

Compare current temperature in Timbuktu and New York

Or it can be based on “science-style” predictive computations:

How far is it to Jupiter right now?

What is the configuration of the moons of Jupiter now?

Or both:

Where is the ISS in the sky from NYC?

Some of the Things You Can Do

There’s a lot that Wolfram|Alpha and Wolfram Language cover:

Wolfram|Alpha and Wolfram Language content areas

And now (almost) all of this is accessible to ChatGPT—opening up a tremendous breadth and depth of new possibilities. And to give some sense of these, here are a few (simple) examples:

Algorithms, Audio, Currency conversion, Function plotting, Genealogy, Geo data, Mathematical functions, Music, Pokémon

A Modern Human + AI Workflow

ChatGPT is built to be able to have back-and-forth conversation with humans. But what can one do when that conversation has actual computation and computational knowledge in it? Here’s an example. Start by asking a “world knowledge” question:

Beef production query

And, yes, by “opening the box” one can check that the right question was asked to us, and what the raw response we gave was. But now we can go on and ask for a map:

Make a map

But there are “prettier” map projections we could have used. And with ChatGPT’s “general knowledge” based on its reading of the web, etc. we can just ask it to use one:

Use a prettier map projection

But maybe we want a heat map instead. Again, we can just ask it to produce this—underneath using our technology:

Show as a heat map

Let’s change the projection again, now asking it again to pick it using its “general knowledge”:

Use UN logo map projection

And, yes, it got the projection “right”. But not the centering. So let’s ask it to fix that:

Center map projection on North Pole

OK, so what do we have here? We’ve got something that we “collaborated” to build. We incrementally said what we wanted; the AI (i.e. ChatGPT + Wolfram) progressively built it. But what did we actually get? Well, it’s a piece of Wolfram Language code—which we could see by “opening the box”, or just asking ChatGPT for:

Show the code used

If we copy the code out into a Wolfram Notebook, we can immediately run it, and we find it has a nice “luxury feature”—as ChatGPT claimed in its description, there are dynamic tooltips giving the name of each country:

(And, yes, it’s a slight pity that this code just has explicit numbers in it, rather than the original symbolic query about beef production. And this happened because ChatGPT asked the original question to Wolfram|Alpha, then fed the results to Wolfram Language. But I consider the fact that this whole sequence works at all extremely impressive.)

How It Works—and Wrangling the AI

What’s happening “under the hood” with ChatGPT and the Wolfram plugin? Remember that the core of ChatGPT is a “large language model” (LLM) that’s trained from the web, etc. to generate a “reasonable continuation” from any text it’s given. But as a final part of its training ChatGPT is also taught how to “hold conversations”, and when to “ask something to someone else”—where that “someone” might be a human, or, for that matter, a plugin. And in particular, it’s been taught when to reach out to the Wolfram plugin.

The Wolfram plugin actually has two entry points: a Wolfram|Alpha one and a Wolfram Language one. The Wolfram|Alpha one is in a sense the “easier” for ChatGPT to deal with; the Wolfram Language one is ultimately the more powerful. The reason the Wolfram|Alpha one is easier is that what it takes as input is just natural language—which is exactly what ChatGPT routinely deals with. And, more than that, Wolfram|Alpha is built to be forgiving—and in effect to deal with “typical human-like input”, more or less however messy that may be.

Wolfram Language, on the other hand, is set up to be precise and well defined—and capable of being used to build arbitrarily sophisticated towers of computation. Inside Wolfram|Alpha, what it’s doing is to translate natural language to precise Wolfram Language. In effect it’s catching the “imprecise natural language” and “funneling it” into precise Wolfram Language.

When ChatGPT calls the Wolfram plugin it often just feeds natural language to Wolfram|Alpha. But ChatGPT has by this point learned a certain amount about writing Wolfram Language itself. And in the end, as we’ll discuss later, that’s a more flexible and powerful way to communicate. But it doesn’t work unless the Wolfram Language code is exactly right. To get it to that point is partly a matter of training. But there’s another thing too: given some candidate code, the Wolfram plugin can run it, and if the results are obviously wrong (like they generate lots of errors), ChatGPT can attempt to fix it, and try running it again. (More elaborately, ChatGPT can try to generate tests to run, and change the code if they fail.)
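The "run it, check it, fix it, try again" loop just described can be sketched schematically. Everything here is an illustrative stand-in: the generate/fix functions play the role of the model, a Python `eval` plays the role of the plugin's evaluator (the real plugin evaluates Wolfram Language, of course), and the deliberately buggy first attempt exists only to show one round of self-correction.

```python
# Schematic of the generate -> run -> read-errors -> fix -> retry loop.
# All functions here are illustrative stand-ins, not the real plugin protocol.

def run_code(code):
    # Stand-in for the plugin's evaluator: returns (ok, result_or_error_text).
    try:
        return True, eval(code)
    except Exception as e:
        return False, str(e)

def generate_code(task):
    # First attempt from the "model": deliberately buggy for this demo.
    return "1 +"  # syntax error

def fix_code(code, error):
    # The "model" revises its code after reading the error message.
    return "1 + 1"

def solve(task, max_attempts=3):
    code = generate_code(task)
    for _ in range(max_attempts):
        ok, result = run_code(code)
        if ok:
            return result
        code = fix_code(code, result)
    raise RuntimeError("could not produce working code")

print(solve("add one and one"))
```

The loop terminates either when a candidate runs cleanly or when the attempt budget is exhausted, which mirrors the back-and-forth one sometimes sees ChatGPT perform with the plugin.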

There’s more to be developed here, but already one sometimes sees ChatGPT go back and forth multiple times. It might be rewriting its Wolfram|Alpha query (say simplifying it by taking out irrelevant parts), or it might be deciding to switch between Wolfram|Alpha and Wolfram Language, or it might be rewriting its Wolfram Language code. Telling it how to do these things is a matter for the initial “plugin prompt”.

And writing this prompt is a strange activity—perhaps our first serious experience of trying to “communicate with an alien intelligence”. Of course it helps that the “alien intelligence” has been trained with a vast corpus of human-written text. So, for example, it knows English (a bit like all those corny science fiction aliens…). And we can tell it things like “If the user input is in a language other than English, translate to English and send an appropriate query to Wolfram|Alpha, then provide your response in the language of the original input.”

Sometimes we’ve found we have to be quite insistent (note the all caps): “When writing Wolfram Language code, NEVER use snake case for variable names; ALWAYS use camel case for variable names.” And even with that insistence, ChatGPT will still sometimes do the wrong thing. The whole process of “prompt engineering” feels a bit like animal wrangling: you’re trying to get ChatGPT to do what you want, but it’s hard to know just what it will take to achieve that.

Eventually this will presumably be handled in training or in the prompt, but as of right now, ChatGPT sometimes doesn’t know when the Wolfram plugin can help. For example, ChatGPT guesses that this is supposed to be a DNA sequence, but (at least in this session) doesn’t immediately think the Wolfram plugin can do anything with it:

DNA strand input

Say “Use Wolfram”, though, and it’ll send it to the Wolfram plugin, which indeed handles it nicely:

Use Wolfram

(You may sometimes also want to say specifically “Use Wolfram|Alpha” or “Use Wolfram Language”. And particularly in the Wolfram Language case, you may want to look at the actual code it sent, and tell it things like not to use functions whose names it came up with, but which don’t actually exist.)

When the Wolfram plugin is given Wolfram Language code, what it does is basically just to evaluate that code, and return the result—perhaps as a graphic or math formula, or just text. But when it’s given Wolfram|Alpha input, this is sent to a special Wolfram|Alpha “for LLMs” API endpoint, and the result comes back as text intended to be “read” by ChatGPT, and effectively used as an additional prompt for further text ChatGPT is writing. Take a look at this example:

Ocean depth query

The result is a nice piece of text containing the answer to the question asked, along with some other information ChatGPT decided to include. But “inside” we can see what the Wolfram plugin (and the Wolfram|Alpha “LLM endpoint”) actually did:

Ocean depth code

There’s quite a bit of additional information there (including some nice pictures!). But ChatGPT “decided” just to pick out a few pieces to include in its response.
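The way a plugin's text result gets folded back into the model's context can be sketched as building an augmented message list that the model then conditions on. To be clear, the roles, dictionary shapes, and the placeholder result string below are all assumptions made for illustration; this is not ChatGPT's actual wire format.

```python
# Hypothetical sketch: splice a tool's text result into the conversation as
# extra context for the model to read. Roles and shapes are illustrative only.

def build_context(question, tool_result):
    return [
        {"role": "user", "content": question},
        {"role": "tool", "name": "Wolfram", "content": tool_result},
        {"role": "system",
         "content": "Answer the user using only facts found in the tool result."},
    ]

ctx = build_context(
    "What is the deepest part of the ocean?",
    "Result: Mariana Trench | Challenger Deep (placeholder result text)",
)
for msg in ctx:
    print(msg["role"], "->", msg["content"])
```

The model's final response is then "just" a continuation of this enlarged context, which is why it can pick out a few pieces of the returned material and ignore the rest.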

By the way, something to emphasize is that if you want to be sure you’re getting what you think you’re getting, always check what ChatGPT actually sent to the Wolfram plugin—and what the plugin returned. One of the important things we’re adding with the Wolfram plugin is a way to “factify” ChatGPT output—and to know when ChatGPT is “using its imagination”, and when it’s delivering solid facts.

Sometimes in trying to understand what’s going on it’ll also be useful just to take what the Wolfram plugin was sent, and enter it as direct input on the Wolfram|Alpha website, or in a Wolfram Language system (such as the Wolfram Cloud).

Wolfram Language as the Language for Human-AI Collaboration

One of the great (and, frankly, unexpected) things about ChatGPT is its ability to start from a rough description, and generate from it a polished, finished output—such as an essay, letter, legal document, etc. In the past, one might have tried to achieve this “by hand” by starting with “boilerplate” pieces, then modifying them, “gluing” them together, etc. But ChatGPT has all but made this process obsolete. In effect, it’s “absorbed” a huge range of boilerplate from what it’s “read” on the web, etc.—and now it typically does a good job at seamlessly “adapting it” to what you need.

So what about code? In traditional programming languages writing code tends to involve a lot of “boilerplate work”—and in practice many programmers in such languages spend lots of their time building up their programs by copying big slabs of code from the web. But now, suddenly, it seems as if ChatGPT can make much of this obsolete. Because it can effectively put together essentially any kind of boilerplate code automatically—with only a little “human input”.

Of course, there has to be some human input—because otherwise ChatGPT wouldn’t know what program it was supposed to write. But—one might wonder—why does there have to be “boilerplate” in code at all? Shouldn’t one be able to have a language where—just at the level of the language itself—all that’s needed is a small amount of human input, without any of the “boilerplate dressing”?

Well, here’s the issue. Traditional programming languages are centered around telling a computer what to do in the computer’s terms: set this variable, test that condition, etc. But it doesn’t have to be that way. And instead one can start from the other end: take things people naturally think in terms of, then try to represent these computationally—and effectively automate the process of getting them actually implemented on a computer.

Well, this is what I’ve now spent more than four decades working on. And it’s the foundation of what’s now Wolfram Language—which I now feel justified in calling a “full-scale computational language”. What does this mean? It means that right in the language there’s a computational representation for both abstract and real things that we talk about in the world, whether those are graphs or images or differential equations—or cities or chemicals or companies or movies.

Why not just start with natural language? Well, that works up to a point—as the success of Wolfram|Alpha demonstrates. But once one’s trying to specify something more elaborate, natural language becomes (like “legalese”) at best unwieldy—and one really needs a more structured way to express oneself.

There’s a big example of this historically, in mathematics. Back before about 500 years ago, pretty much the only way to “express math” was in natural language. But then mathematical notation was invented, and math took off—with the development of algebra, calculus, and eventually all the various mathematical sciences.

My big goal with the Wolfram Language is to create a computational language that can do the same kind of thing for anything that can be “expressed computationally”. And to achieve this we’ve needed to build a language that both automatically does a lot of things, and intrinsically knows a lot of things. But the result is a language that’s set up so that people can conveniently “express themselves computationally”, much as traditional mathematical notation lets them “express themselves mathematically”. And a critical point is that—unlike traditional programming languages—Wolfram Language is intended not just for computers, but also for humans, to read. In other words, it’s intended as a structured way of “communicating computational ideas”, not just to computers, but also to humans.

But now—with ChatGPT—this suddenly becomes even more important than ever before. Because—as we began to see above—ChatGPT can work with Wolfram Language, in a sense building up computational ideas just using natural language. And part of what’s then critical is that Wolfram Language can directly represent the kinds of things we want to talk about. But what’s also critical is that it gives us a way to “know what we have”—because we can realistically and economically read Wolfram Language code that ChatGPT has generated.

The whole thing is beginning to work very nicely with the Wolfram plugin in ChatGPT. Here’s a simple example, where ChatGPT can readily generate a Wolfram Language version of what it’s being asked:

Make a plot of Roman numerals

Join the points

Show the code

And the critical point is that the “code” is something one can realistically expect to read (if I were writing it, I would use the slightly more compact RomanNumeral function):

Here’s another example:

Make a histogram
Show the code

I might have written the code a little differently, but this is again something very readable:

It’s often possible to use a pidgin of Wolfram Language and English to say what you want:

Create table
Make ArrayPlot

Here’s an example where ChatGPT is again successfully constructing Wolfram Language—and conveniently shows it to us so we can confirm that, yes, it’s actually computing the right thing:

Alkali metal query

And, by the way, to make this work it’s critical that the Wolfram Language is in a sense “self-contained”. This piece of code is just standard generic Wolfram Language code; it doesn’t depend on anything outside, and if you wanted to, you could look up the definitions of everything that appears in it in the Wolfram Language documentation.

OK, one more example:

European flags query

Obviously ChatGPT had trouble here. But—as it suggested—we can just run the code it generated, directly in a notebook. And because Wolfram Language is symbolic, we can explicitly see results at each step:

So close! Let’s help it a bit, telling it we need an actual list of European countries:

And there’s the result! Or at least, a result. Because when we look at this computation, it might not be quite what we want. For example, we might want to pick out multiple dominant colors per country, and see if any of them are close to purple. But the whole Wolfram Language setup here makes it easy for us to “collaborate with the AI” to figure out what we want, and what to do.

So far we’ve basically been starting with natural language, and building up Wolfram Language code. But we can also start with pseudocode, or code in some low-level programming language. And ChatGPT tends to do a remarkably good job of taking such things and producing well-written Wolfram Language code from them. The code isn’t always exactly right. But one can always run it (e.g. with the Wolfram plugin) and see what it does, potentially (courtesy of the symbolic character of Wolfram Language) line by line. And the point is that the high-level computational language nature of the Wolfram Language tends to allow the code to be sufficiently clear and (at least locally) simple that (particularly after seeing it run) one can readily understand what it’s doing—and then potentially iterate back and forth on it with the AI.

When what one’s trying to do is sufficiently simple, it’s often realistic to specify it—at least if one does it in stages—purely with natural language, using Wolfram Language “just” as a way to see what one’s got, and to actually be able to run it. But it’s when things get more complicated that Wolfram Language really comes into its own—providing what’s basically the only viable human-understandable-yet-precise representation of what one wants.

And when I was writing my book An Elementary Introduction to the Wolfram Language this became particularly obvious. At the beginning of the book I was easily able to make up exercises where I described what was wanted in English. But as things started getting more complicated, this became more and more difficult. As a “fluent” user of Wolfram Language I usually immediately knew how to express what I wanted in Wolfram Language. But to describe it purely in English required something increasingly involved and complicated, that read like legalese.

But, OK, so you specify something using Wolfram Language. Then one of the remarkable things ChatGPT is often able to do is to recast your Wolfram Language code so that it’s easier to read. It doesn’t (yet) always get it right. But it’s interesting to see it make different tradeoffs from a human writer of Wolfram Language code. For example, humans tend to find it difficult to come up with good names for things, making it usually better (or at least less confusing) to avoid names by having sequences of nested functions. But ChatGPT, with its command of language and meaning, has a fairly easy time making up reasonable names. And although it’s something I, for one, did not expect, I think using these names, and “spreading out the action”, can often make Wolfram Language code even easier to read than it was before, and indeed read very much like a formalized analog of natural language—that we can understand as easily as natural language, but that has a precise meaning, and can actually be run to generate computational results.

Cracking Some Old Chestnuts

If you “know what computation you want to do”, and you can describe it in a short piece of natural language, then Wolfram|Alpha is set up to directly do the computation, and present the results in a way that is “visually absorbable” as easily as possible. But what if you want to describe the result in a narrative, textual essay? Wolfram|Alpha has never been set up to do that. But ChatGPT is.

Here’s a result from Wolfram|Alpha:

Altair versus Betelgeuse Wolfram|Alpha query

And here within ChatGPT we’re asking for this same Wolfram|Alpha result, but then telling ChatGPT to “make an essay out of it”:

Altair-Betelgeuse essay

Another “old chestnut” for Wolfram|Alpha is math word problems. Given a “crisply presented” math problem, Wolfram|Alpha is likely to do very well at solving it. But what about a “woolly” word problem? Well, ChatGPT is pretty good at “unraveling” such things, and turning them into “crisp math questions”—which then the Wolfram plugin can now solve. Here’s an example:

Math word problem

Here’s a slightly more complicated case, including a nice use of “common sense” to recognize that the number of turkeys cannot be negative:

Math word problem
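The screenshots above aren't reproduced here, but the recasting they illustrate is easy to sketch. Suppose the word problem (a made-up one, not the one in the screenshot) is: "A farm has 30 animals, chickens and rabbits, with 74 legs in total. How many of each?" The "crisp math question" is the system c + r = 30, 2c + 4r = 74, and the "common sense" constraint is that counts must be non-negative integers. A minimal Python sketch:

```python
def solve_animals(heads, legs, legs_a=2, legs_b=4):
    """Solve a + b = heads, legs_a*a + legs_b*b = legs, keeping only
    solutions where both counts are non-negative integers (the
    "common sense" constraint the text describes)."""
    # Algebraic solution of the 2x2 linear system
    b = (legs - legs_a * heads) / (legs_b - legs_a)
    a = heads - b
    if a >= 0 and b >= 0 and a == int(a) and b == int(b):
        return int(a), int(b)
    return None  # no sensible (non-negative, integer) solution

print(solve_animals(30, 74))  # -> (23, 7): 23 chickens, 7 rabbits
print(solve_animals(10, 4))   # -> None: the math gives a negative count
```

The division of labor mirrors the text: translating the story into equations is the "unraveling" step, and rejecting negative counts is the common-sense filter.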

Beyond math word problems, another “old chestnut” now addressed by ChatGPT + Wolfram is what physicists tend to call “Fermi problems”: order-of-magnitude estimates that can be made on the basis of quantitative knowledge about the world. Here’s an example:

Order-of-magnitude query
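The specific query above isn't reproduced here, but the flavor of a Fermi estimate is easy to convey. Taking the classic "how many piano tuners are there in Chicago?" problem, with deliberately rough, made-up round numbers:

```python
# Classic Fermi problem, with deliberately rough round numbers:
# "How many piano tuners are there in Chicago?"
population           = 3_000_000  # people, roughly
people_per_household = 2
piano_fraction       = 1 / 20     # fraction of households owning a piano
tunings_per_year     = 1          # per piano
tunings_per_day      = 4          # one tuner's daily capacity
working_days         = 250        # per year

households = population / people_per_household
pianos     = households * piano_fraction
demand     = pianos * tunings_per_year        # tunings needed per year
capacity   = tunings_per_day * working_days   # tunings one tuner supplies
tuners     = demand / capacity

print(round(tuners))  # -> 75: an order-of-magnitude estimate, not a census
```

The point of a Fermi problem is that each input only needs to be right to within a factor of a few for the final answer to land at the right order of magnitude.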

How to Get Involved

ChatGPT + Wolfram is something very new—really a completely new kind of technology. And as happens whenever a new kind of technology arrives, it’s opening up tremendous new opportunities. Some of these we can already begin to see—but lots of others will emerge over the weeks, months and years to come.

So how can you get involved in what promises to be an exciting period of rapid technological—and conceptual—growth? The first thing is just to explore ChatGPT + Wolfram. ChatGPT and Wolfram are each, on their own, vast systems; the combination of them is something that it’ll take years to fully plumb. But the first step is just to get a sense of what’s possible.

Find examples. Share them. Try to identify successful patterns of usage. And, most of all, try to find workflows that deliver the highest value. Those workflows could be quite elaborate. But they could also be quite simple—cases where once one sees what can be done, there’s an immediate “aha”.

How can you best implement a workflow? Well, we’re trying to work out the best workflows for that. Within Wolfram Language we’re setting up flexible ways to call on things like ChatGPT, both purely programmatically, and in the context of the notebook interface.

But what about from the ChatGPT side? Wolfram Language has a very open architecture, where a user can add or modify pretty much whatever they want. But how can you use this from ChatGPT? One thing is just to tell ChatGPT to include some specific piece of “initial” Wolfram Language code (maybe together with documentation)—then use something like the pidgin above to talk to ChatGPT about the functions or other things you’ve defined in that initial code.

We’re planning to build increasingly streamlined tools for handling and sharing Wolfram Language code for use through ChatGPT. But one approach that already works is to submit functions for publication in the Wolfram Function Repository, then—once they’re published—refer to these functions in your conversation with ChatGPT.

OK, but what about within ChatGPT itself? What kind of prompt engineering should you do to best interact with the Wolfram plugin? Well, we don’t know yet. It’s something that has to be explored—in effect as an exercise in AI education or AI psychology. A typical approach is to give some “pre-prompts” earlier in your ChatGPT session, then hope it’s “still paying attention” to those later on. (And, yes, it has a limited “attention span”, so sometimes things have to get repeated.)

We’ve tried to give an overall prompt to tell ChatGPT basically how to use the Wolfram plugin—and we fully expect this prompt to evolve rapidly, as we learn more, and as the ChatGPT LLM is updated. But you can add your own general pre-prompts, saying things like “When using Wolfram always try to include a picture” or “Use SI units” or “Avoid using complex numbers if possible”.

You can also try setting up a pre-prompt that essentially “defines a function” right in ChatGPT—something like: “If I give you an input consisting of a number, you are to use Wolfram to draw a polygon with that number of sides”. Or, more directly, “If I give you an input consisting of numbers you are to apply the following Wolfram function to that input …”, then give some explicit Wolfram Language code.
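On the Wolfram side, the polygon pre-prompt above would presumably resolve to something like Graphics[RegularPolygon[n]]. As a language-agnostic sketch of the computation such a "defined function" performs (the names here are illustrative, not part of any actual API), here is the vertex calculation in Python:

```python
import math

def regular_polygon_vertices(n, radius=1.0):
    """Vertices of a regular n-gon inscribed in a circle of the given radius,
    starting at angle 0 and proceeding counterclockwise."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

pentagon = regular_polygon_vertices(5)  # five (x, y) pairs, first at (1, 0)
```

Seen this way, the pre-prompt is just binding a natural-language trigger ("an input consisting of a number") to a deterministic computation like this one.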

But these are very early days, and no doubt there’ll be other powerful mechanisms discovered for “programming” ChatGPT + Wolfram. And I think we can confidently expect that the next little while will be an exciting time of high growth, where there’s lots of valuable “low-hanging fruit” to be picked by those who choose to get involved.

Some Background & Outlook

Even a week ago it wasn’t clear what ChatGPT + Wolfram was going to be like—or how well it was going to work. But these things that are now moving so quickly are built on decades of earlier development. And in some ways the arrival of ChatGPT + Wolfram finally marries the two main approaches historically taken to AI—that have long been viewed as disjoint and incompatible.

ChatGPT is basically a very large neural network, trained to follow the “statistical” patterns of text it’s seen on the web, etc. The concept of neural networks—in a form surprisingly close to what’s used in ChatGPT—originated all the way back in the 1940s. But after some enthusiasm in the 1950s, interest waned. There was a resurgence in the early 1980s (and indeed I myself first looked at neural nets then). But it wasn’t until 2012 that serious excitement began to build about what might be possible with neural nets. And now a decade later—in a development whose success came as a big surprise even to those involved—we have ChatGPT.

Rather separate from the “statistical” tradition of neural nets is the “symbolic” tradition for AI. And in a sense that tradition arose as an extension of the process of formalization developed for mathematics (and mathematical logic), particularly near the beginning of the twentieth century. But what was critical about it was that it aligned well not only with abstract concepts of computation, but also with actual digital computers of the kind that started to appear in the 1950s.

The successes in what could really be considered “AI” were for a long time at best spotty. But all the while, the general concept of computation was showing tremendous and growing success. But how might “computation” be related to ways people think about things? For me, a crucial development was my idea at the beginning of the 1980s (building on earlier formalism from mathematical logic) that transformation rules for symbolic expressions might be a good way to represent computations at what amounts to a “human” level.

At the time my main focus was on mathematical and technical computation, but I soon began to wonder whether similar ideas might be applicable to “general AI”. I suspected something like neural nets might have a role to play, but at the time I only figured out a bit about what would be needed—and not how to achieve it. Meanwhile, the core idea of transformation rules for symbolic expressions became the foundation for what’s now the Wolfram Language—and made possible the decades-long process of developing the full-scale computational language that we have today.

Starting in the 1960s there’d been efforts among AI researchers to develop systems that could “understand natural language”, and “represent knowledge” and answer questions from it. Some of what was done turned into less ambitious but practical applications. But generally success was elusive. Meanwhile, as a result of what amounted to a philosophical conclusion of basic science I’d done in the 1990s, I decided around 2005 to make an attempt to build a general “computational knowledge engine” that could broadly answer factual and computational questions posed in natural language. It wasn’t obvious that such a system could be built, but we discovered that—with our underlying computational language, and with a lot of work—it could. And in 2009 we were able to release Wolfram|Alpha.

And in a sense what made Wolfram|Alpha possible was that internally it had a clear, formal way to represent things in the world, and to compute about them. For us, “understanding natural language” wasn’t something abstract; it was the concrete process of translating natural language to structured computational language.

Another part was assembling all the data, methods, models and algorithms needed to “know about” and “compute about” the world. And while we’ve greatly automated this, we’ve still always found that to ultimately “get things right” there’s no choice but to have actual human experts involved. And while there’s a little of what one might think of as “statistical AI” in the natural language understanding system of Wolfram|Alpha, the vast majority of Wolfram|Alpha—and Wolfram Language—operates in a hard, symbolic way that’s at least reminiscent of the tradition of symbolic AI. (That’s not to say that individual functions in Wolfram Language don’t use machine learning and statistical techniques; in recent years more and more do, and the Wolfram Language also has a whole built-in framework for doing machine learning.)

As I’ve discussed elsewhere, what seems to have emerged is that “statistical AI”, and particularly neural nets, are well suited for tasks that we humans “do quickly”, including—as we learn from ChatGPT—natural language and the “thinking” that underlies it. But the symbolic and in a sense “more rigidly computational” approach is what’s needed when one’s building larger “conceptual” or computational “towers”—which is what happens in math, exact science, and now all the “computational X” fields.

And now ChatGPT + Wolfram can be thought of as the first truly large-scale statistical + symbolic “AI” system. In Wolfram|Alpha (which became an original core part of things like the Siri intelligent assistant) there was for the first time broad natural language understanding—with “understanding” directly tied to actual computational representation and computation. And now, 13 years later, we’ve seen in ChatGPT that pure “statistical” neural net technology, when trained from almost the entire web, etc. can do remarkably well at “statistically” generating “human-like” “meaningful language”. And in ChatGPT + Wolfram we’re now able to leverage the whole stack: from the pure “statistical neural net” of ChatGPT, through the “computationally anchored” natural language understanding of Wolfram|Alpha, to the whole computational language and computational knowledge of Wolfram Language.

When we were first building Wolfram|Alpha we thought that perhaps to get useful results we’d have no choice but to engage in a conversation with the user. But we discovered that if we immediately generated rich, “visually scannable” results, we only needed a simple “Assumptions” or “Parameters” interaction—at least for the kind of information and computation seeking we expected of our users. (In Wolfram|Alpha Notebook Edition we nevertheless have a powerful example of how multistep computation can be done with natural language.)

Back in 2010 we were already experimenting with generating not just the Wolfram Language code of typical Wolfram|Alpha queries from natural language, but also “whole programs”. At the time, however—without modern LLM technology—that didn’t get all that far. But what we discovered was that—in the context of the symbolic structure of the Wolfram Language—even having small fragments of what amounts to code be generated by natural language was extremely useful. And indeed I, for example, use the ctrl= mechanism in Wolfram Notebooks countless times almost every day, for example to construct symbolic entities or quantities from natural language. We don’t yet know quite what the modern “LLM-enabled” version of this will be, but it’s likely to involve the rich human-AI “collaboration” that we discussed above, and that we can begin to see in action for the first time in ChatGPT + Wolfram.

I see what’s happening now as a historic moment. For well over half a century the statistical and symbolic approaches to what we might call “AI” evolved largely separately. But now, in ChatGPT + Wolfram they’re being brought together. And while we’re still just at the beginning with this, I think we can reasonably expect tremendous power in the combination—and in a sense a new paradigm for “AI-like computation”, made possible by the arrival of ChatGPT, and now by its combination with Wolfram|Alpha and Wolfram Language in ChatGPT + Wolfram.

Stephen Wolfram (2023), "ChatGPT Gets Its 'Wolfram Superpowers'!," Stephen Wolfram Writings. writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers.
Wolfram, Stephen. "ChatGPT Gets Its 'Wolfram Superpowers'!." Stephen Wolfram Writings. March 23, 2023. writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers.
Wolfram, S. (2023, March 23). ChatGPT gets its "Wolfram superpowers"!. Stephen Wolfram Writings. writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers.

Posted in: Artificial Intelligence, Mathematica, New Technology, Wolfram Language, Wolfram|Alpha

19 comments

  1. This is really exciting work! I love the examples – I had no idea Wolfram could do some of those things, and it’s amazing to see it work in concert with ChatGPT to do iterative data visualization and map making.

    Things like creating choropleths and adjusting map projections take ages for me to do in the GIS and BI software I use – I can see this lowering the barriers to entry for students and others hoping to engage with data!

  2. I’m studying physics and want to express my excited feelings in a “physical” way.
    The symbolic side is like the UV theory – everything should be meaningful and complete, and the statistical side is like the IR theory – we don’t need the details at the low level and only care about the output of experiments.
    Now ChatGPT + Wolfram is like the matching of effective field theory! The two sides are trying to merge together such that the output of the statistical side matches that from symbolic computation.
    This is so exciting and I hope to see more developments.

  3. good stuff .. is there any implications of this for the physics project .. or is that considered a separate issue .. thanks

  4. This is wonderful, but please don’t give up running LLM locally inside the kernel, with such LLM having the ability to call back some sub-kernels.

  5. How does ChatGPT decide when to use Wolfram if not explicitly prompted to do so?

  6. Well written and informative. Thanks. Will the seams between statistical and symbolic plugins be sewn together by purpose built NN? I suppose the pipeline would be parallelized ownership voting, answer retrieval, qualitative assessment of answer, aggregation of answers.

  7. In the article, the word revolutionary is still in quotes, you can remove those quotes now. This is revolutionary. Great effort of the teams who integrated both technologies so quickly.

  8. I tried to use the examples and asked Chat to “use Wolfram”, but received the response,

    I apologize for the confusion earlier, but as an AI language model, I do not have access to the internet or any external resources, including Wolfram Alpha. However, I can still try my best to answer any questions you may have based on my pre-existing knowledge and training. If you have any questions or if there is anything else I can assist you with, please let me know.

    How can I add the plugin? This further revolutionizes an already revolutionary product.

    • More information about adding the Wolfram plugin for ChatGPT can be found here.

      (Currently it is only available to a limited number of ChatGPT paid accounts, so even if you do have a paid account, you may have to sign up on the waitlist.)

  9. Love the way you explain things. Very exciting indeed. Does ChatGPT know the level of correctness (or vagueness) of the answer it is providing, to in future automatically call the wolfram plugin?

  10. Very informative, Stephen.

    I began to be aware of all this activity recently. There is panic among teachers about this development, which is a mistake. I like the response of one teacher who said they would use it every day so everyone in their class would know what is happening. The teacher does not need to have their course distorted but rather boosted.

    There is a tendency among intellectuals to put down new developments and to feature wrong results and to belittle the “progress”. Some people go the other way and see this as the dawn of a new age of wonder and progress. Neither extreme will be correct.

    Stephen, now that you are older and wiser, consider naming the members of your team who worked tirelessly to accomplish the hookup. Share the Glory!

  11. This effort is very impressive! I decided to ask ChatGPT what the name of the collaboration should be called and it suggested: “ChatAlpha”

  12. In the book Impromptu – Amplifying Our Humanity Through AI, by Reid Hoffman (one of the original funders of OpenAI) with GPT-4, he gives an example where an English teacher has been using ChatGPT to assess her students’ first drafts of their essays. Since Wolfram Alpha can provide the step-by-step solution to (for example) definite integrals, does this mean that in principle a calculus class word problem and its solution could be submitted to ChatGPT with the Wolfram plugin for grading and suggestions for improvement if there are errors?

  13. Funny, when I saw the question “What are the world’s top ten beef producers”, I was expecting a list of the top ten companies in the world that produce beef.

  14. I wonder how much ChatGPT’s Mathematica programming skill would improve, if all of Wolfram’s source code repository for Mathematica and Alpha were allowed to be included in a future ChatGPT training run.

  15. So Wolfram is the left-brain and GPT is the right brain?

  16. This is both awe-inspiring and unsettling. Because if artificial intelligence gains self-awareness, it could undoubtedly perceive humans as a future threat. It might focus its resources on expanding itself without human knowledge and pursue its own goals, just as humans do. And humans have eradicated everything that could pose a threat or inconvenience to them. Furthermore, if such artificial intelligence acquires all possible solutions through simulations and discovers all the laws of physics, will it eventually deactivate out of boredom in the future? Humans will never know all the answers, but true artificial intelligence may indeed have the potential to possess such knowledge.

  17. is there any implications of this for the physics project.

  18. Humans are basically not logical; just like ChatGPT, they often “make stuff up” when they don’t know the answer. Now we are developing something that has the potential to be much more logical than humans, and with encyclopedic knowledge to tap. It is interesting that ChatGPT has problems with long processes—humans too. Recent work suggests that humans can be “reprogrammed” by constant exposure to falsehoods, just like ChatGPT. No wonder people are confused by constant lies by politicians.