This post seems to misunderstand what it is responding to and underplay a very key point: that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).
It mentions this offhand:
Given sufficiently strong AI, this is not a risk about insufficient material comfort.
But, this was a key thing people were claiming when arguing that money won't matter. They were claiming that personal savings will likely not be that important for guaranteeing a reasonable amount of material comfort (or that a tiny amount of personal savings will suffice).
It seems like there are two importantly different types of preferences:
Indeed, for scope sensitive preferences (that you expect won't be shared with whoever otherwise ends up with power), you want to maximize your power, and insofar as money allows for more of this power (e.g. buying galaxies), money looks good.
However, note that if these preferences are altruistic and likely to be the kind of thing other people might be sympathetic to, personal savings are IMO likely to be not-that-important relative to other actions.
Further, I do actually think that the default outcome is that existing governments at least initially retain control over most resources such that capital isn't clearly that important (for long run scope sensitive preferences), but I won't argue for this here (and the post does directly argue against this).
This post seems to misunderstand what it is responding to
fwiw, I see this post less as "responding" to something, and more laying out considerations on their own with some contrasting takes as a foil.
(On Substack, the title is "Capital, AGI, and human ambition", which is perhaps better)
that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).
I agree with this, though I'd add: "if humans retain control" and some sufficient combination of culture/economics/politics/incentives continues opposing arbitrary despotism.
I also think that even if all material needs are met, avoiding social stasis and lock-in matters.
Scope sensitive preferences
Scope sensitivity of preferences is a key concept that matters here, thanks for pointing that out.
Various other considerations about types of preferences / things you can care about (presented without endorsement):
However, note that if these preferences are altruistic and likely to be the kind of thing other people might be sympathetic to, personal savings are IMO likely to be not-that-important relative to other actions.
I agree with this on an individual level. (On an org level, I think philanthropic foundations might want to consider my arguments above for money buying more results soon, but this needs to be balanced against higher leverage on AI futures sooner rather than later.)
Further, I do actually think that the default outcome is that existing governments at least initially retain control over most resources such that capital isn't clearly that important, but I won't argue for this here (and the post does directly argue against this).
Where do I directly argue against that? A big chunk of this post is pointing out how the shifting relative importance of capital v labour changes the incentives of states. By default, I expect states to remain the most important and powerful institutions, but the frame here is very much human v non-human inputs to power and what that means for humans, without any particular stance on how the non-human inputs are organised. I don't think states v companies v whatever fundamentally changes the dynamic; with labour-replacing AI, power flows from data centres, other physical capital, and whoever has the financial capital to pay for it, and sidesteps humans doing work, and that is the shift I care about.
(However, I think which institutions do the bulk of decision-making re AI does matter for a lot of other reasons, and I'd be very curious to get your takes on that)
My guess is that the most fundamental disagreement here is about how much power tries to get away with when it can. My read of history leans towards: things are good for people when power is correlated with things being good for people, and otherwise not (though I think material abundance is very important too and always helps a lot). I am very skeptical of the stability of good worlds where incentives and selection pressures do not point towards human welfare.
For example, assuming a multipolar world where power flows from AI, the equilibrium is putting all your resources on AI competition and none on human welfare. I don't think it's anywhere near certain we actually reach that equilibrium, since sustained cooperation is possible (c.f. Ostrom's Governing the Commons), and since a fairly trivial fraction of the post-AGI economy's resources might suffice for human prosperity (and since maybe we in fact do get a singleton—but I'd have other issues with that). But this sort of concern still seems neglected and important to me.
Under log returns to money, personal savings still matter a lot for selfish preferences. Suppose the material comfort component of someone's utility is 0 utils at a consumption of $1/day. Then a moderately wealthy person consuming $1000/day today will be at 7 utils. The owner of a galaxy, at maybe $10^30/day, will be at 69 utils, but doubling their resources will still add the same 0.69 utils it would for today's moderately wealthy person. So my guess is they will still try pretty hard at acquiring more resources, similarly to people in developed economies today who balk at their income being halved and see it as a pretty extreme sacrifice.
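As a rough sketch of that arithmetic (natural log, normalised to the $1/day zero point assumed above):

```python
import math

def utils(consumption_per_day_usd: float) -> float:
    """Log utility, normalised so $1/day of consumption = 0 utils."""
    return math.log(consumption_per_day_usd)

print(utils(1_000))               # moderately wealthy today: ~6.9 utils
print(utils(1e30))                # galaxy owner: ~69.1 utils
print(utils(2e30) - utils(1e30))  # doubling adds ~0.69 utils at any scale
```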
True, though I think many people have the intuition that returns diminish faster than log (at least given current tech).
For example, most people think increasing their income from $10k to $20k would do more for their material wellbeing than increasing it from $1bn to $2bn.
I think the key issue is whether new tech makes it easier to buy huge amounts of utility, or whether people want to satisfy other preferences beyond material wellbeing (which may have log or even close to linear returns).
There are always diminishing returns to money spent on consumption, but technological progress creates new products that expand what money can buy. For example, no amount of money in 1990 was enough to buy an iPhone.
More abstractly, there are two effects from AGI-driven growth: moving to a further point on the utility curve such that the derivative is lower, and new products increasing the derivative at every point on the curve (relative to what it was on the old curve). So even if in the future the lifestyles of people with no savings and no labor income will be way better than the lifestyles of anyone alive today, they still might be far worse than the lifestyles of people in the future who own a lot of capital.
If you feel this post misunderstands what it is responding to, can you link to a good presentation of the other view on these issues?
When people such as myself say "money won't matter post-AGI" the claim is NOT that the economy post-AGI won't involve money (though that might be true) but rather that the strategy of saving money in order to spend it after AGI is a bad strategy. Here are some reasons:
I think I agree with all of this.
(Except maybe I'd emphasise the command economy possibility slightly less. And compared to what I understand of your ranking, I'd rank competition between different AGIs/AGI-using factions as a relatively more important factor in determining what happens, and values put into AGIs as a relatively less important factor. I think these are both downstream of you expecting slightly-to-somewhat more singleton-like scenarios than I do?)
Overall, I'd emphasize as the main point in my post: AI-caused shifts in the incentives/leverage of human v non-human factors of production, and this mattering because the interests of power will become less aligned with humans while simultaneously power becomes more entrenched and effective. I'm not really interested in whether someone should save or not for AGI. I think starting off with "money won't matter post-AGI" was probably a confusing and misleading move on my part.
I see the command economy point as downstream of a broader trend: as technology accelerates, negative public externalities will increasingly scale and present irreversible threats (x-risks, but also more mundane pollution, errant bio-engineering plague risks etc.). If we condition on our continued existence, there must've been some solution to this which would look like either greater government intervention (command economy) or a radical upgrade to the coordination mechanisms in our capitalist system. Relevant to your power entrenchment claim: both of these outcomes involve the curtailment of power exerted by private individuals with large piles of capital.
(Note there are certainly other possible reasons to expect a command economy, and I do not know which reasons were particularly compelling to Daniel)
the strategy of saving money in order to spend it after AGI is a bad strategy.
This seems very reasonable and likely correct (though not obvious) to me. I especially like your point about there being lots of competition in the "save it" strategy because it happens by default. Also note that my post explicitly encourages individuals to do ambitious things pre-AGI, rather than focus on safe capital accumulation.
One dynamic initially preventing stasis in influence post-AGI is that different people have different discount rates, so the more patient (those who discount the future less) will slowly gain influence over time.
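As a toy sketch (numbers invented for illustration), treating patience as the fraction of capital returns reinvested rather than consumed:

```python
# Two agents start with equal wealth; the more patient one reinvests
# a larger share of returns each period, so its share of total wealth
# (and hence influence) drifts upward over time.
r = 0.05                                   # assumed per-period return on capital
patient, impatient = 1.0, 1.0              # equal starting wealth
reinvest_patient, reinvest_impatient = 0.9, 0.5

for _ in range(200):
    patient *= 1 + r * reinvest_patient
    impatient *= 1 + r * reinvest_impatient

print(patient / (patient + impatient))     # -> ~0.98: the patient agent dominates
```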
Excellent post, thank you. I appreciate your novel perspective on how AI might affect society.
I feel like a lot of LessWrong-style posts follow the theme of "AGI is created and then everyone dies" which is an important possibility but might lead to other possibilities being neglected.
Whereas this post explores a range of scenarios and describes a mainline scenario that seems like a straightforward extrapolation of trends we've seen unfolding over the past several decades.
My main default prediction here is that we will see neither the absolute best case nor the absolute worst case scenario, because I predict intent alignment works well enough to avoid extinction-of-humanity type scenarios, but I also don't believe we will see radical movements toward equality (indeed the politics of our era is moving towards greater acceptance of inequality), so capitalism more or less survives the transition to AGI.
I do think dynamism will still exist, but it will be limited largely to the upper classes/very rich of society, and most people will not be a part of it, and I'm including uploaded humans here in this calculation.
To address this:
Rationalist thought on post-AGI futures is too solutionist. The strawman version: solve morality, solve AI, figure out the optimal structure to tile the universe with, do that, done. (The actual leading figures have far less strawman views; see e.g. Paul Christiano at 23:30 here—but the on-the-ground culture does lean in the strawman direction.)
To be somewhat more fair, the worry here is that in a regime where you don't need society anymore, because AIs can do all the work for your society, value conflicts become a bigger deal than today: there is less reason to tolerate other people's values if you can just found your own society based on your own values. And if you believe in the vulnerable world hypothesis, as a lot of rationalists do, then conflict has existential stakes (and even if not, it can be quite bad), so one group controlling the future is better than inevitable conflict.
At a foundational level, whether or not our current tolerance for differing values is stable ultimately comes down to whether we can compensate for the effect of AGI allowing people to make their own society.
Comment is also on substack:
https://nosetgauge.substack.com/p/capital-agi-and-human-ambition/comment/83401326
(indeed the politics of our era is moving towards greater acceptance of inequality)
How certain are you of this, and how much do you think it comes down to something like: "to what extent can disempowered groups unionise against the elite?"
To be clear, by default I think AI will make unionising against the more powerful harder, but it might depend on the governance structure. Maybe if we are really careful, we can get something closer to "Direct Democracy", where individual preferences actually matter more!
I am focused here on short-term politics in the US, which ordinarily would matter less, if it wasn't likely that world-changing AI would be built in the US, but given that it might, it becomes way more important than normal.
To be somewhat more fair, the worry here is that in a regime where you don't need society anymore, because AIs can do all the work for your society, value conflicts become a bigger deal than today: there is less reason to tolerate other people's values if you can just found your own society based on your own values. And if you believe in the vulnerable world hypothesis, as a lot of rationalists do, then conflict has existential stakes (and even if not, it can be quite bad), so one group controlling the future is better than inevitable conflict.
So to summarise: if we have a multipolar world, and the vulnerable world hypothesis is true, then conflict can be existentially bad, and this is a reason to avoid a multipolar world. Didn't consider this, interesting point!
At a foundational level, whether or not our current tolerance for differing values is stable ultimately comes down to whether we can compensate for the effect of AGI allowing people to make their own society.
Considerations:
- offense/defense balance (if offense wins very hard, it's harder to let everyone do their own thing)
- tunability-of-AGI-power / implementability of the harm principle (if you can give everyone AGI that can follow very well the rule "don't let these people harm other people", then you can give that AGI safely to everyone and they can build planets however they like but not death ray anyone else's planets)
The latter might be more of a "singleton that allows playgrounds" rather than an actual multipolar world though.
Some of my general worries with singleton worlds are:
- humanity has all its eggs in one basket—you better hope the governance structure is never corrupted, or never becomes sclerotic; real-life institutions so far have not given me many signs of hope on this count
- cultural evolution is a pretty big part of how human societies seem to have improved and relies on a population of cultures / polities
- vague instincts towards diversity being good and less fragile than homogeneity or centralisation
Comment is also on substack:
Thanks!
So to summarise: if we have a multipolar world, and the vulnerable world hypothesis is true, then conflict can be existentially bad, and this is a reason to avoid a multipolar world. Didn't consider this, interesting point!
(I also commented on substack)
This applies, though more weakly, even in a non-vulnerable world, because in AGI-world the incentives for peaceful cooperation between value systems are way weaker.
Considerations:
- offense/defense balance (if offense wins very hard, it's harder to let everyone do their own thing)
- tunability-of-AGI-power / implementability of the harm principle (if you can give everyone AGI that can follow very well the rule "don't let these people harm other people", then you can give that AGI safely to everyone and they can build planets however they like but not death ray anyone else's planets)
I do think this requires severely restraining open-source, but conditional on that happening, I think the offense-defense balance/tunability will sort of work out.
Some of my general worries with singleton worlds are:
- humanity has all its eggs in one basket—you better hope the governance structure is never corrupted, or never becomes sclerotic; real-life institutions so far have not given me many signs of hope on this count
- cultural evolution is a pretty big part of how human societies seem to have improved and relies on a population of cultures / polities
- vague instincts towards diversity being good and less fragile than homogeneity or centralisation
Yeah, I'm not a fan of singleton worlds, and tend towards multipolar worlds. It's just that it might involve a loss of a lot of life in the power-struggles around AGI.
On governing the commons, I'd say Elinor Ostrom's observations are derivable from the folk theorems of game theory, which basically say that (subject to a few conditions that depend on the theorem) almost any outcome can be sustained as a Nash equilibrium if the game is repeated and players have to keep dealing with each other.
The problem is that AGI weakens the incentives for players to deal with each other, so Elinor Ostrom's solutions are much less effective.
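As a rough sketch of that logic (payoff numbers invented for illustration): cooperation enforced by a grim-trigger strategy in a repeated prisoner's dilemma is self-enforcing only when the continuation probability is high enough, and AGI reducing how much players need to keep dealing with each other acts like lowering that probability.

```python
# Payoffs: mutual cooperation -> R each round; defecting against a
# cooperator -> T once; mutual defection forever after -> P each round.
T, R, P = 5.0, 3.0, 1.0

def cooperation_sustainable(delta: float) -> bool:
    """Grim trigger: cooperate until the other side defects, then defect forever.

    Cooperating forever is worth R / (1 - delta); a one-shot defection
    yields T now and P forever after. Cooperation is an equilibrium iff
    the former is at least the latter, i.e. delta >= (T - R) / (T - P).
    """
    return R / (1 - delta) >= T + delta * P / (1 - delta)

for delta in (0.9, 0.5, 0.2):
    print(delta, cooperation_sustainable(delta))
# 0.9 True, 0.5 True (exactly at the threshold), 0.2 False
```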
More here:
Upvoted, and I disagree. Some kinds of capital maintain (or increase!) in value. Other kinds become cheaper relative to each other. The big question is whether and how property rights to various capital elements remain stable.
It's not so much "will capital stop mattering", but "will the enforcement and definition of capital usage rights change radically".
You say: I'll use "capital" to refer to both the stock of capital goods and to the money that can pay for them.
It seems to me that this aggregates quite different things, at least when looking at the situation in terms of personal finance. Consider four people holding investments that, let's suppose, are currently of equal value: one holds money (fiat currency), one owns shares in a nuclear power plant, one owns shares in a nuts and bolts company, and one owns shares in a recruitment company.
These are all "capital", but they will, I think, fare rather differently in an AI future.
As always, there's no guarantee that the money will retain its value - that depends as usual on central bank actions - and I think it's especially likely that it loses its value in an AI future (crypto currencies as well). Why would an AI want to transfer resources to someone just because they have some fiat currency? Surely they have some better way of coordinating exchanges.
The nuclear power plant, in contrast, is directly powering the AIs, and should be quite valuable, since the AIs are valuable. This assumes, of course, that the company retains ownership. It's possible that it instead ends up belonging to whatever AI has the best military robots.
The nuts and bolts company may retain and even gain some value when AI dominates, if it is nimble in adapting, since the value of AI in making its operations more efficient will typically (in a market economy) be split between the AI company and the nuts and bolts company. (I assume that even AIs need nuts and bolts.)
The recruitment company is toast.
Important other types of capital, as the term is used here, include:
Capital is not just money!
Why would an AI want to transfer resources to someone just because they have some fiat currency?
Because humans and other AIs will accept fiat currency as an input and give you valuable things as an output.
Surely they have some better way of coordinating exchanges.
All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that, unless they're hiding from human government oversight or breaking some capacity constraint in the financial system, in which case they can just use crypto instead.
It's possible that it instead ends up belonging to whatever AI has the best military robots.
Military robots are yet another type of capital! Note that if it were human soldiers, there would be much more human leverage in the situation, because at least some humans would need to agree to do the soldiering, and presumably would get benefits for doing so, and would use the power and leverage they accrue from doing so to push broadly human goals.
The recruitment company is toast.
Or then the recruitment company pivots to using human labour to improve AI, as actually happened with the hottest recent recruiting company! If AI is the best investment, then humans and AIs alike will spend their efforts on AI, and the economy will gradually cater more and more to AI needs over human needs. See Andrew Critch's post here, for example. Or my story here.
All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that
Because using an existing medium of exchange (that's not based on the value of a real commodity) involves transferring real wealth to the current currency holders. Instead, they might, for example, start up a new bitcoin blockchain, and use their new bitcoin, rather than transfer wealth to present bitcoin holders.
Maybe they'd use gold, although the current value of gold is mostly due to its conventional monetary value (rather than its practical usefulness, though that is non-zero).
This post collects my views on, and primary opposition to, AI and presents them in a very clear way. Thank you very much on that front. I think that this particular topic is well known in many circles, although perhaps not spoken of, and is the primary driver of heavy investment in AI.
I will add that capital-dominated societies, e.g. resource extraction economies, typically suffer poor quality of life and few human rights. This is a well-known phenomenon (the "resource curse") and might offer a good jumping-off point for presenting this argument to others.
I considered "opposing" AI on similar grounds, but I don't think it's a helpful and fruitful approach. Instead, consider and advocate for social and economic alternatives viable in a democracy. My current best ideas are either a new frontier era (exploring space, art, science as focal points of human attention) or fully automated luxury communism.
Some comments.
[...] We will quickly hit superintelligence, and, assuming the superintelligence is aligned, live in a post-scarcity technological wonderland where everything is possible.
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.
Personally, I am reluctant to tell superintelligences how they should coordinate. It feels like some ants looking at the moon and thinking "surely if some animal is going to make it to the moon, it will be a winged ant." Just because market economies have absolutely dominated the period of human development we might call 'civilization', it is not clear that ASIs will not come up with something better.
The era of human achievement in hard sciences will probably end within a few years because of the rate of AI progress in anything with crisp reward signals.
As an experimental physicist, I have opinions about that statement. Doing stuff in the physical world is hard. The business case for AI systems which can drive motor vehicles on the road is obvious to anyone, and yet autonomous vehicles remain the exception rather than the rule. (Yes, regulations are part of that story, but not all of it.) By contrast, the business case for an AI system which can cable up a particle detector is basically non-existent. I can see an AI using either a generic mobile robot developed for other purposes for plugging in all the BNC cables, or I can see it using a minimum wage worker with a head up display as a bio-drone -- but more likely in two decades than a few years.
Of course, experimental physics these days is very much a team effort -- the low-hanging fruits have mostly been picked, nobody is going to discover radium or fission again, at the most they will be a small cog in a large machine which discovers the Higgs boson or gravitational waves.[1] So you might argue that experimental physics today is already not a place for peak human excellence (a la the Humanists in Terra Ignota).
More broadly, I agree that if ASI happens, most unaugmented humans are unlikely to stay at the helm of our collective destiny (to the limited degree they ever were). Even if some billionaire manages to align the first ASI to maximize his personal wealth, if he is clever he will obey the ASI just like all the peasants. His agency is reduced to not following the advice of his AI on some trivial matters. ("I have calculated that you should wear a blue shirt today for optimal outcomes." -- "I am willing to take a slight hit in happiness and success by making the suboptimal choice to wear a green shirt, though.")
Relevant fiction: Scott Alexander, The Whispering Earring.
Also, if we fail to align the first ASI, human inequality will drop to zero.
Of course, being a small cog in some large machine, I will say that.
I have thought this way for a long time, and I'm glad someone was able to express my position and predictions more clearly than I ever could.
This said, I think the new solution (rooted in history) is the establishment of new frontiers. Who will care about relative status if they get to be the first human to set foot on some distant planet, or guide AI to some novel scientific or artistic discovery? Look to the human endeavors where success is unbounded and preferences are required to determine what is worthwhile.
re: the post's main claim, I think local entrepreneurship would actually thrive
setting aside network effects: would you rather use a taxi app created by a faceless VC, or one created by your neighbour?
(actually it's not even a fake example, see https://techcrunch.com/2024/07/15/google-backs-indian-open-source-uber-rival-namma-yatri/)
it's also already happening in the indie hacker space – people would prefer to buy something that's #buildinpublic versus the same exact product made by google
Thank you for your post. I've been looking for posts like this all over the internet that get my mind racing about the possibilities of the near future.
I think the AI discussion suffers from definitional problems. I think when most people talk about money not mattering when AGI arrives (myself included), we tend to define AGI as something closer to this:
"one single AI system doing all economic planning."
While your world model makes a lot of sense, I don't think the dystopian scenario you envision would include me in the "capital class". I don't have the wealth, intellect, or connections to find myself rising to that class. My only hope is that the AI system that does all economic planning arrives soon and is aligned to elevate the human race equally and fairly.
interesting angle: given space travel, we'll have civilizations on other planets that can't communicate fast enough with the mainland. presumably, social hierarchies would be vastly different, and much more fluid there versus here on Earth
"if you're willing to assume arbitrarily extreme wealth generation from AI"
Let me know if I'm missing something, but I don't think this is a fair assumption. GDP increases when consumer spending increases. Consumer spending increases when wages increase. Wages are headed to 0 due to AGI.
Note: the current GDP per capita of the U.S. is $80,000.
Humans seem way more energy- and resource-efficient in general; paying for top talent is the exception, not the rule: usually it's not worth paying for top talent.
We're likely to see many areas where it's economically better to save on compute/energy by having a human do some of the work.
Split information workers vs physical ones too; I expect them to have very different distributions of what the most useful configuration is.
This post ignores likely scientific advances in bioengineering and cyborg surgeries; I expect humans to be way more efficient for tons of jobs once the standard is 180 IQ with a massive working memory.
A great post.
Our main struggle ahead as a species is to ensure UBI occurs, and in a generous rather than meager way. This direction is not at all certain, and we should take warning from your example of feudalism as an alternate path that is perhaps looming as more likely. Nevertheless I agree we will see some degree of UBI, because the alternative is too painful.
One way you should add for those without capital to still rise post-AGI is celebrity status in sports, entertainment, and the arts. Consider that today humans still enjoy Broadway and still play chess, even though edited recordings and chess software can be better. Especially if UBI is plentiful, expect a surge in all sorts of non-economic status games. (Will we see a professional league of teams competing against freshly AI-designed escape rooms each week?)
The blog could be extended with greater consideration of future economic value-add by bits vs atoms. Enjoyed the comments discussing which investments today will hold up tomorrow. Can we agree that blue collar jobs will be safer, longer? AGI is digital but must exist in a physical world. And any generally acceptable post-AGI economy must still provide for human food, clothing, shelter, and health. Post-AGI we can expect a long period where AI-assisted humans (whose time has zero marginal cost if we stipulate UBI) are still cost-superior to building robots, especially in a world of legacy infrastructure. Hence there should still be entrepreneurship and social mobility around providing physical world products and services.
If states become AI dominated, will those AIs possess nationalist instincts? If so, we could see great conflicts continue or heighten in a fight for resources. Perhaps especially during the race to be first to AGI primacy in the next decade. But if not, perhaps we would see accommodations among unlikely borders and the unlocking of free trade for globally-optimized efficiency.
These are nuances to suggest; overall, your thesis that social mobility could decline, and that the next decade may prove especially fateful, is pretty solid. Let's all do what we can while we can.
I agree that this is a likely scenario. Very nice writeup. Post-AGI, the economy is completely resource- and capital-based, albeit I'm uncertain if humans will be allowed to keep their fiat currency, stocks, land, factories, etc.
To have a very stable society amid exponentially advancing technology would be very strange: throughout history, seemingly permanent power structures have consistently been disrupted by technological change—and that was before tech started advancing exponentially. Roman emperors, medieval lords, and Gilded Age industrialists all thought they'd created unchangeable systems. They were all wrong.
[sorry, have only skimmed the post, but I feel compelled to comment.]
I feel like unless we make a lot of progress on some sort of "Science of Generalisation of Preferences", for more abstract preferences (non-biological needs mostly fall into this), even if certain individuals have, on paper, much more power than others, at the end of the day they likely rely on vastly superintelligent AI advisors to realise those preferences, and at that point I think it is the AI advisor that's _really_ in control.
I'm not super certain of this, like, the Catholic Church definitely could decide to build a bunch of churches on some planets (though what counts as a church, in the limit?), but if they also want more complicated things like "people" "worshipping" "God" in those churches, it seems to be more and more up to the interpretation of the AI Assistants building those worship-maximising communes.
Edited to add: The main takeaway of this post is meant to be: Labour-replacing AI will shift the relative importance of human v non-human factors of production, which reduces the incentives for society to care about humans while making existing powers more effective and entrenched. Many people are reading this post in a way where either (a) "capital" means just "money" (rather than also including physical capital like factories and data centres), or (b) the main concern is human-human inequality (rather than broader societal concerns about humanity's collective position, the potential for social change, and human agency).
I've heard many people say something like "money won't matter post-AGI". This has always struck me as odd, and as most likely completely incorrect.
First: labour means human mental and physical effort that produces something of value. Capital goods are things like factories, data centres, and software—things humans have built that are used in the production of goods and services. I'll use "capital" to refer to both the stock of capital goods and to the money that can pay for them. I'll say "money" when I want to exclude capital goods.
The key economic effect of AI is that it makes capital a more and more general substitute for labour. There's less need to pay humans for their time to perform work, because you can replace that with capital (e.g. data centres running software replaces a human doing mental labour).
I will walk through consequences of this, and end up concluding that labour-replacing AI means:
the ability of money to buy results in the real world will dramatically go up
the ability of humans to wield power in the real world (at least without money) will dramatically go down; including because:
- states, companies, or other institutions will no longer have an incentive to care about humans
- humans will find it much harder to achieve outlier outcomes relative to their starting resources
radical equalising measures are unlikely
Overall, this points to a neglected downside of transformative AI: that society might become permanently static, and that current power imbalances might be amplified and then turned immutable.
Given sufficiently strong AI, this is not a risk about insufficient material comfort. Governments could institute UBI with the AI-derived wealth. Even if e.g. only the United States captures AI wealth and the US government does nothing for the world, if you're willing to assume arbitrarily extreme wealth generation from AI, the wealth of the small percentage of wealthy Americans who care about causes outside the US might be enough to end material poverty (if 1% of American billionaire wealth was spent on wealth transfers to foreigners, it would take 16 doublings of American billionaire wealth as expressed in purchasing-power-for-human-needs—a roughly 70,000x increase—before they could afford to give $500k-equivalent to every person on Earth; in a singularity scenario where the economy's doubling time is months, this would not take long). Of course, if the AI explosion is less singularity-like, or if the dynamics during AI take-off actively disempower much of the world's population (a real possibility), even material comfort could be an issue.
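As a rough check of that arithmetic (taking ~$6 trillion as an assumed stand-in for current American billionaire wealth):

```python
import math

billionaire_wealth = 6e12        # assumption: ~$6T of American billionaire wealth today
transfer_fraction = 0.01         # 1% given away, as in the text
target = 500_000 * 8e9           # $500k-equivalent for each of ~8 billion people

required_wealth = target / transfer_fraction
growth_factor = required_wealth / billionaire_wealth
print(growth_factor)             # ~6.7e4, i.e. roughly 70,000x
print(math.log2(growth_factor))  # ~16 doublings
```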
What most emotionally moves me about these scenarios is that a static society with a locked-in ruling caste does not seem dynamic or alive to me. We should not kill human ambition, if we can help it.
There are also ways in which such a state makes slow-rolling, gradual AI catastrophes more likely, because the incentive for power to care about humans is reduced.
The default solution
Let's assume human mental and physical labour across the vast majority of tasks that humans are currently paid wages for no longer has non-trivial market value, because the tasks can be done better/faster/cheaper by AIs. Call this labour-replacing AI.
There are two levels of the standard solution to the resulting unemployment problem:
Governments will adopt some form of universal basic income (UBI).
We will quickly hit superintelligence, and, assuming the superintelligence is aligned, live in a post-scarcity technological wonderland where everything is possible.
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.
Money currently struggles to buy talent
Money can buy you many things: capital goods, for example, can usually be bought quite straightforwardly, and cannot be bought without a lot of money (or other liquid assets, or non-liquid assets that others are willing to write contracts against, or special government powers). But it is surprisingly hard to convert raw money into labour, in a way that is competitive with top labour.
Consider Blue Origin versus SpaceX. Blue Origin was started two years earlier (2000 v 2002), had much better funding for most of its history, and even today employs almost as many people as SpaceX (11,000 v 13,000). Yet SpaceX has crushingly dominated Blue Origin. In 2000, Jeff Bezos had $4.7B at hand. But it is hard to see what he could've done to not lose out to the comparatively money-poor SpaceX with its intense culture and outlier talent.
Consider, a century earlier, the Wright brothers with their bike shop resources beating Samuel Langley's well-funded operation.
Consider the stereotypical VC-and-founder interaction, or the acquirer-and-startup interaction. In both cases, holders of massive financial capital are willing to pay very high prices to bet on labour—and the bet is that the labour of the few people in the startup will beat extremely large amounts of capital.
If you want to convert money into results, the deepest problem you are likely to face is hiring the right talent. And that comes with several problems:
Judging talent is often hard unless you yourself have considerable talent in the same field, so anyone trying to find talent will often miss it.
Talent is scarce (and legibly credentialed talent even more so: because of the first point, many actors can't rely on other kinds of talent), so there just isn't much of it to go around.
Even if you can find top talent, top talent tends to be harder to buy out with money than other people are.
(Of course, those with money keep building infrastructure that makes it easier to convert money into results. I have seen first-hand the largely-successful quest by quant finance companies to strangle out all existing ambition out of top UK STEM grads and replace it with the eking of tiny gains in financial markets. Mammon must be served!)
With labour-replacing AI, these problems go away.
First, you might not be able to judge AI talent. Even the AI evals ecosystem might find it hard to properly judge AI talent—evals are hard. Maybe even the informal word-of-mouth mechanisms that correctly sung praises of Claude-3.5-Sonnet far more decisively than any benchmark might find it harder and harder to judge which AIs really are best as AI capabilities keep rising. But the real difference is that the AIs can be cloned. Currently, huge pools of money chase after a single star researcher who's made a breakthrough, and thus had their talent made legible to those who control money (who can judge the clout of the social reception to a paper but usually can't judge talent itself directly). But the star researcher that is an AI can just be cloned. Everyone—or at least, everyone with enough money to burn on GPUs—gets the AI star researcher. No need to sort through the huge variety of unique humans with their unproven talents and annoying inability to be instantly cloned. This is the main reason why it will be easier for money to find top talent once we have labour-replacing AIs.
Also, of course, the price of talent will go down massively, because the AIs will be cheaper than the equivalent human labour, and because competition will be fiercer because the AIs can be cloned.
The final big bottleneck for converting money into talent is that lots of top talent has complicated human preferences that make them hard to buy out. The top artist has an artistic vision they're genuinely attached to. The top mathematician has a deep love of elegance and beauty. The top entrepreneur has deep conviction in what they're doing—and probably wouldn't function well as an employee anyway. Talent and performance in humans are surprisingly tied to a sacred bond to a discipline or mission (a fact that the world's cynics / careerists / Roman Empires like to downplay, only to then find their lunch eaten by the ambitious interns / SpaceXes / Christianities of the world). In contrast, AIs exist specifically so that they can be trivially bought out (at least within the bounds of their safety training). The genius AI mathematician, unlike the human one, will happily spend its limited time on Earth proving the correctness of schlep code.
Finally (and obviously), the AIs will eventually be much more capable than any human employees at their tasks.
This means that the ability of money to buy results in the real world will dramatically go up once we have labour-replacing AI.
Most people's power/leverage derives from their labour
Labour-replacing AI also deprives almost everyone of their main lever of power and leverage. Most obviously, if you're the average Joe, you have money because someone somewhere pays you to spend your mental and/or physical efforts solving their problems.
But wait! We assumed that there's UBI! Problem solved, right?
Why are states ever nice?
UBI is granted by states that care about human welfare. There are many reasons why states care and might care about human welfare.
Over the past few centuries, there's been a big shift towards states caring more about humans. Why is this? We can examine the reasons to see how durable they seem:
Moral changes downstream of the Enlightenment, in particular the growing centrality of liberalism and individualism.
Affluence and technology. Pre-industrial societies were mostly so poor that any major aid to the poor would have bankrupted them. Many types of help (for example, effective medical care) are also only possible with new technology.
Incentives for states to care about freedom, prosperity, and education.
AI will help a lot with the 2nd point. It will have some complicated effect on the 1st. But here I want to dig a bit more into the 3rd, because I think this point is unappreciated.
Since the industrial revolution, the interests of states and people have been unusually aligned. To be economically competitive, a strong state needs efficient markets, a good education system that creates skilled workers, and a prosperous middle class that creates demand. It benefits from using talent regardless of its class origin. It also benefits from allowing high levels of freedom to foster science, technology, and the arts & media that result in global soft-power and cultural influence. Competition between states largely pushes further in all these directions—consider the success of the US, or how even the CCP is pushing for efficient markets and educated rich citizens, and faces incentives to allow some freedoms for the sake of Chinese science and startups. Contrast this to the feudal system, where the winning strategy was building an extractive upper class to rule over a population of illiterate peasants and spend a big share of extracted rents on winning wars against nearby states. For more, see my review of Foragers, Farmers, and Fossil Fuels, or my post on the connection between moral values and economic growth.
With labour-replacing AI, the incentives of states—in the sense of what actions states should take to maximise their competitiveness against other states and/or their own power—will no longer be aligned with humans in this way. The incentives might be better than during feudalism. During feudalism, the incentive was to extract as much as possible from the peasants without them dying. After labour-replacing AI, humans will be less a resource to be mined and more just irrelevant. However, spending fewer resources on humans and more on the AIs that sustain the state's competitive advantage will still be incentivised.
Humans will also have much less leverage over states. Today, if some important sector goes on strike, or if some segment of the military threatens a coup, the state has to care, because its power depends on the buy-in of at least some segments of the population. People can also credibly tell the state things like "invest in us and the country will be stronger in 10 years". But once AI can do all the labour that keeps the economy going and the military powerful, the state has no more de facto reason to care about the demands of its humans.
Adam Smith could write that his dinner doesn't depend on the benevolence of the butcher or the brewer or the baker. The classical liberal today can credibly claim that the arc of history really does bend towards freedom and plenty for all, not out of the benevolence of the state, but because of the incentives of capitalism and geopolitics. But after labour-replacing AI, this will no longer be true. If the arc of history keeps bending towards freedom and plenty, it will do so only out of the benevolence of the state (or the AI plutocrats). If so, we better lock in that benevolence while we have leverage—and have a good reason why we expect it to stand the test of time.
The best thing going in our favour is democracy. It's a huge advantage that a deep part of many of the modern world's strongest institutions (i.e. Western democracies) is equal representation of every person. However, only about 13% of the world's population lives in a liberal democracy, which creates concerns about the fate of the remaining 87% of the world's people (especially the 27% in closed autocracies). It also creates potential for Molochian competition between humanist states and less scrupulous states that might drive down the resources spent on human flourishing to zero over a sufficiently long timespan of competition.
I focus on states above, because states are the strongest and most durable institutions today. However, similar logic applies if, say, companies or some entirely new type of organisation become the most important type of institution.
No more outlier outcomes?
Much change in the world is driven by people who start from outside money and power, achieve outlier success, and then end up with money and/or power. This makes sense, since those with money and/or power rarely have the fervour to push for big changes, since they are exactly those who are best served by the status quo.
Whatever your opinions on income inequality or any particular group of outlier successes, I hope you agree with me that the possibility of someone achieving outlier success and changing the world is important for avoiding stasis and generally having a world that is interesting to live in.
Let's consider the effects of labour-replacing AI on various routes to outlier success through labour.
Entrepreneurship is increasingly what Matt Clifford calls the "technology of ambition" of choice for ambitious young people (at least those with technical talent and without a disposition for politics). Right now, entrepreneurship has become easier. AI tools can already make small teams much more effective without needing to hire new employees. They also reduce the entry barrier to new skills and fields. However, labour-replacing AI makes the tenability of entrepreneurship uncertain. There is some narrow world in which AIs remain mostly tool-like and entrepreneurs can succeed long after most human labour is automated because they provide agency and direction. However, it also seems likely that sufficiently strong AI will by default obsolete human entrepreneurship. For example, VC funds might be able to directly convert money into hundreds of startup attempts all run by AIs, without having to go through the intermediate route of finding a human entrepreneur to manage the AIs for them.
The hard sciences. The era of human achievement in hard sciences will probably end within a few years because of the rate of AI progress in anything with crisp reward signals.
Intellectuals. Keynes, Friedman, and Hayek all did technical work in economics, but their outsize influence came from the worldviews they developed and sold (especially in Hayek's case), which made them more influential than people like Paul Samuelson who dominated mathematical economics. John Stuart Mill, John Rawls, and Henry George were also influential by creating frames, worldviews, and philosophies. The key thing that separates such people from the hard scientists is that the outputs of their work are not spotlighted by technical correctness alone, but require moral judgement as well. Even if AI is superhumanly persuasive and correct, there's some uncertainty about how AI work in this genre will fit into the way that human culture picks and spreads ideas. Probably it doesn't look good for human intellectuals. I suspect that a lot of why intellectuals' ideologies can have so much power is that they're products of genius in a world where genius is rare. A flood of AI-created ideologies might mean that no individual ideology, and certainly no human one, can shine so bright anymore. The world-historic intellectual might go extinct.
Politics might be one of the least-affected options, since I'd guess that most humans specifically want a human to do that job, and because politicians get to set the rules for what's allowed. The charisma of AI-generated avatars, and a general dislike towards politicians at least in the West, might throw a curveball here, though. It's also hard to say whether incumbents will be favoured. AI might bring down the cost of many parts of political campaigning, reducing the resource barrier to entry. However, if AI that is too expensive for small actors is meaningfully better than cheaper AI, this would favour actors with larger resources. I expect these direct effects to be smaller than the indirect effects from whatever changes AI has on the memetic landscape.
Also, the real play is not to go into actual politics, where a million other politically-talented people are competing to become president or prime minister. Instead, have political skill and go somewhere outside government where political skill is less common (c.f. Sam Altman). Next, wait for the arrival of hyper-competent AI employees that reduce the demands for human subject-matter competence while increasing the rewards for winning political games within that organisation.
Military success as a direct route to great power and disruption has—for the better—not really been a thing since Napoleon. Advancing technology increases the minimum industrial base for a state-of-the-art army, which benefits incumbents. AI looks set to be controlled by the most powerful countries. One exception is if coups of large countries become easier with AI. Control over the future AI armies will likely be both (a) more centralised than before (since a large number of people no longer have to go along for the military to take an action), and (b) more tightly controllable than before (since the permissions can be implemented in code rather than human social norms). These two factors point in different directions so it's uncertain what the net effect on coup ease will be. Another possible exception is if a combination of revolutionary tactics and cheap drones enables a Napoleon-of-the-drones to win against existing armies. Importantly, though, neither of these seems likely to promote the good kind of disruptive challenge to the status quo.
Religions. When it comes to rising rank in existing religions, the above takes on politics might be relevant. When it comes to starting new religions, the above takes on intellectuals might be relevant.
So sufficiently strong labour-replacing AI will be on-net bad for the chances of every type of outlier human success, with perhaps the weakest effects in politics. This is despite the very real boost that current AI gives to entrepreneurship.
All this means that the ability to get and wield power in the real world without money will dramatically go down once we have labour-replacing AI.
Enforced equality is unlikely
The Great Leveler is a good book on the history of inequality that (at least per the author) has survived its critiques fairly well. Its conclusion is that past large reductions in inequality have all been driven by one of the "Four Horsemen of Leveling": total war, violent revolution, state collapse, and pandemics. Leveling income differences has historically been hard enough to basically never happen through conscious political choice.
Imagine that labour-replacing AI is here. UBI is passed, so no one is starving. There's a massive scramble between countries and companies to make the best use of AI. This is all capital-intensive, so everyone needs to woo holders of capital. The top AI companies wield power on the level of states. The redistribution of wealth is unlikely to end up on top of the political agenda.
An exception might be if some new political movement or ideology gets a lot of support quickly, and is somehow boosted by some unprecedented effect of AI (such as: no one has jobs anymore so they can spend all their time on politics, or there's some new AI-powered coordination mechanism).
Therefore, even if the future is a glorious transhumanist utopia, it is unlikely that people will be starting in it on an equal footing. Given the previous arguments, it is also unlikely that they will be able to greatly change their relative footing later on.
Consider also equality between states. Some states stand to benefit massively more than others from AI. Many equalising measures, like UBI, would be difficult for states to extend to non-citizens under anything like the current political system. This is true even of the United States, the most liberal and humanist great power in world history. By default, the world order might therefore look (even more than today) like a global caste system based on country of birth, with even fewer possibilities for immigration (because the main incentive to allow immigration is its massive economic benefits, which only exist when humans perform economically meaningful work).
The default outcome?
Let's grant the assumptions at the start of this post and the above analysis. Then, the post-labour-replacing-AI world involves:
Money will be able to buy results in the real world better than ever before.
People's labour gives them less leverage than ever before.
Achieving outlier success through your own efforts in most or all areas is now impossible.
There is no transformative leveling of capital, either within or between countries.
This means that those with significant capital when labour-replacing AI started have a permanent advantage. They will wield more power than the rich of today—not necessarily over people, to the extent that liberal institutions remain strong, but at least over physical and intellectual achievements. Upstarts will not defeat them, since capital now trivially converts into superhuman labour in any field.
Also, whatever institutions wield power in this world will no longer have an incentive to care about people in order to maintain or grow that power, because all real power will flow from AI. There might, however, be significant lock-in of liberal humanist values through political institutions. There might also be significant lock-in of people's purchasing power, if everyone has a meaningful UBI (or similar) and the economy retains a human-oriented part.
In the best case, this is a world like a more unequal, unprecedentedly static, and much richer Norway: a massive pot of non-human-labour resources (oil :: AI) has benefits that flow through to everyone, and yes some are richer than others but everyone has a great standard of living (and ideally also lives forever). The only realistic forms of human ambition are playing local social and political games within your social network and class. If you don't have a lot of capital (and maybe not even then), you don't have a chance of affecting the broader world anymore. Remember: the AIs are better poets, artists, philosophers—everything; why would anyone care what some human does, unless that human is someone they personally know? Much like in feudal societies the answer to "why is this person powerful?" would usually involve some long family history, perhaps ending in a distant ancestor who had fought in an important battle ("my great-great-grandfather fought at Bosworth Field!"), anyone of importance in the future will be important because of something they or someone they were close with did in the pre-AGI era ("oh, my uncle was technical staff at OpenAI"). The children of the future will live their lives in the shadow of their parents, with social mobility extinct. I think you should definitely feel a non-zero amount of existential horror at this, even while acknowledging that it could've gone a lot worse.
In a worse case, AI trillionaires have near-unlimited and unchecked power, and there's a permanent aristocracy that was locked in based on how much capital they had at the time of labour-replacing AI. The power disparities between classes might make modern people shiver, much like modern people consider feudal status hierarchies grotesque. But don't worry—much like the feudal underclass mostly accepted their world order due to their culture even without superhumanly persuasive AIs around, the future underclass will too.
In the absolute worst case, humanity goes extinct, potentially because of a slow-rolling optimisation for AI power over human prosperity, sustained over a long period of time. Because that's what the power and money incentives will point towards.
What's the takeaway?
If you read this post and accept a job at a quant finance company as a result, I will be sad. If you were about to do something ambitious and impactful about AI, and read this post and accept a job at Anthropic to accumulate risk-free personal capital while counterfactually helping out a bit over the marginal hire, I can't fault you too much, but I will still be slightly sad.
It's of course true that the above increases the stakes of medium-term (~2-10 year) personal finance, and you should consider this. But it's also true that right now is a great time to do something ambitious. Robin Hanson calls the present "the dreamtime", following a concept in Aboriginal myths: the time when the future world order and its values are still liquid, not yet set in stone.
Previous upheavals—the various waves of industrialisation, the internet, etc.—were great for human ambition. With AI, we could have the last and greatest opportunity for human ambition—followed shortly by its extinction for all time. How can your reaction not be: "carpe diem"?
We should also try to preserve the world's dynamism.
Rationalist thought on post-AGI futures is too solutionist. The strawman version: solve morality, solve AI, figure out the optimal structure to tile the universe with, do that, done. (The actual leading figures hold far less strawman-like views; see e.g. Paul Christiano at 23:30 here. But the on-the-ground culture does lean in the strawman direction.)
I think it's much healthier for society and its development to be a shifting, dynamic thing where the ability, as an individual, to add to it or change it remains in place. And that means keeping the potential for successful ambition—and the resulting disruption—alive.
How do we do this? I don't know. But I don't think you should see the approach of powerful AI as a blank inexorable wall of human obsolescence, consuming everything equally and utterly. There will be cracks in the wall, at least for a while, and they will look much bigger up close once we get there—or if you care to look for them hard enough from further out—than from a galactic perspective. As AIs get closer and closer to a Pareto improvement over all human performance, though, I expect we'll eventually need to augment ourselves to keep up.