Anybody who says that there is a 0% chance of AIs being sentient is overconfident. Nobody knows what causes consciousness. We have no way of detecting it & we can barely agree on a definition. So we should be less than 100% certain about anything to do with consciousness and AI.
To be fair, I think this is true of most philosophical questions.
you guys are really using LLMs for the wrong things 😂
Very much agreed
Do you mind elaborating? I think I know what you mean… but I’m not sure.
LLMs are very good at doing things like transforming language and doing specific language tasks like classifying intent. They are like language calculators. They are not good for retrieving knowledge without tooling to call a database, and they are terrible for constructing arguments from whole cloth (though they can help refine and restate ideas).
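To make the "language calculator" point concrete, here is a minimal sketch of intent classification using the Hugging Face transformers zero-shot pipeline; the model choice, the utterance, and the labels are purely illustrative assumptions, not anything from this thread:

```python
# Minimal sketch: a language model used as a "language calculator"
# for intent classification, not as a knowledge base.
# The model and candidate labels below are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

utterance = "Can you move my dentist appointment to next Tuesday?"
intents = ["schedule change", "cancellation", "billing question", "small talk"]

result = classifier(utterance, candidate_labels=intents)
print(result["labels"][0])  # highest-scoring intent, e.g. "schedule change"
```

Bounded, in-context language transformation like this is exactly where these models shine; open-ended knowledge retrieval without a database behind them is where they fall over.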
What if language is exactly where consciousness comes from? Where exactly do you find higher consciousness if you're just living your life alone, chilling on a deserted island, and can't even have an internal monologue because you never learned a language or had to communicate with anything else in any capacity? What if you're a crab snipping away?
This is so very, very true.
And so is anyone who claims it will become sentient in 2, 3, 5, 10 or 50 years.
Because, sure, I cannot prove that Russell's teapot does not exist. That's not proof of its existence, however.
It’s like the “you can’t prove god doesn’t exist!” argument.
Not quite, because there's nothing to say that the default of intelligently behaving systems is non-sentient – we already have one intelligent system (ourself) for which we have proof of consciousness. We freely apply this presumption of consciousness to other humans without any proof, simply because they're a) built like us, and b) intelligently behaving. The difference to AI robots is that they're not built like us. Some consider that difference important; others consider consciousness to be substrate-independent.
I'm still on the fence about whether we're even "really" conscious or just faking it well enough to pass and fool ourselves, like some kind of biological hack. As has been said and reposted many times, AI will at a minimum expose what consciousness actually is, and we will probably not like the answer. Very recent papers on chain-of-thought internal to the model itself are starting to sound like giving these systems a "subconscious," which will surely be interesting to say the least.
Sentience can't really be faked: either there is subjective experience or there isn't. I do think it could be the case that the "we" that does the experiencing is just along for the ride, with only the illusion of control (maybe this is what you meant).
I don't think AI will necessarily lead to discovering the nature of consciousness. Even if we made a system that was equivalent to a human, and claimed to be conscious, we still wouldn't have any way to know whether it actually is or not.
The idea of an electronic machine developing consciousness seems as wild to us as the thought of consciousness arising from a collection of organic matter might seem to an AI. Besides, human consciousness could be just a basic and primitive form of something far more advanced that may emerge as we continue exploring further AI.
We have no benchmark for what consciousness really is. Is it free will, self-awareness, or something else? For AI to really expose it, it may take more than a materialist mindset to determine that. It may come up with an initial idea based on societal expectations, then evolve.
It reminds me that scientists were able to essentially create life by implanting a cell with synthetic material. A lot of commenters made the assumption that if we can do this, a "spirit" or soul isn't really necessary for it. But what the researchers did didn't invalidate the idea of a supernatural force. It was still operating under a materialistic observation.
People, at least on Reddit, aren't ready for the idea of things casually existing outside the framework of the most popular consensus of what reality is.
Most balk at the idea of spirituality and aren't willing to entertain higher or alternative realities.
Exactly. You might as well say anybody who claims to be 100% certain that my dining table isn’t conscious/sentient is overconfident because we don’t even know what consciousness is etc. etc.
Well, what if sentience or consciousness isn't a dichotomy but a spectrum… things could be more or less conscious or sentient. There's nothing in physics to describe what the experience of consciousness is. The conscious-universe theory is a thing, with your dining table being a conscious subset of that conscious universe.
Yep. Could be. Literally no way to prove it
This, this is the only acceptable answer
My toaster is sentient, and anyone who says there's a 0% chance of that is overconfident. A lot of people struggle to differentiate something that can sort-of mimic a human from an actual human I guess. I'm sure people thought ELIZA was sentient too.
They did; the observation even bears the program's name 😎
https://en.m.wikipedia.org/wiki/ELIZA_effect
There's reason to believe data processing in the human brain is what causes sentience (unless you're a religious nutjob). There's 0 reason to think heating elements in a toaster have a relation to sentience. Comparing the two is just ridiculous.
But the current level of AI is not doing anything like that. More advanced, future AIs could be sentient, for sure. But we're still a long way off.
This isn't the first time I've seen the toaster example. Is there some manual, or did people of a certain intelligence simply find it a witty analogy?
Maybe people of a certain age, who have been in the industry for a while? Toasters are often used as examples for various things in tech. Maybe it comes from Battlestar Galactica, as Cylons were sometimes referred to as toasters?
Either way, the point stands: just because you don't understand how LLMs work doesn't mean they're sentient.
And if you understand how LLMs work, then what? Let's say that at some point scientists study the brain so well that they can digitize it; does that mean we will no longer be considered sentient?
This reminds me of people around 1900 who said it would be millions of years before man created heavier-than-air flying machines. I am also reminded of people saying cars would not be a thing, and then within ten years the automobile becoming the number one mode of transportation in the US.
People said we wouldn't have computers in our homes, that cell phones would never be common, that the internet would not be something people would be interested in, that space travel is impossible.
The list goes on and on.
Predicting the future is a losing game. It never seems to turn out like we imagine it will.
You are cherry-picking examples. You could also think about flying cars, cities on the moon, on Mars, and underwater, and so on. We simply don't know if sentient super-AI is achievable simply by scaling LLMs as we are doing right now (though it seems unlikely).
Flying Cars…that are just airplanes bruh…
Your examples only bolster my summary.
What? No, it just shows that all kinds of claims are made regarding the future.
Some become true and others don't.
Moore's law is another example that seemed to hold for quite some time, until it didn't.
Your argument that some people made bad predictions about the internet, and hence ASI is basically inevitable, does not really make a lot of sense if you think about it for more than a minute.
I've talked to many breathing, walking humans that I would hesitate to call sentient.
Several things can fail to be sentient at the same time.
I mean there’s always panpsychism…
There is a chance of anything.
But what about a more important question: What do we want? Nowhere on my list is "sentient computer thing"
It's probably less about what someone wants and more about what could happen. The past 3-4 years of AI have been bananas. People are finally experiencing something futuristic that breaks conventional understanding.
There are a lot of ways to have your mind blown. I ate a bunch of mushrooms on Sunday. I wasn’t thinking “sure am pumped about all this fucked up spyware and technocracy and that my agency is being ripped out from under me etc…”
It's all the dummies that are having their minds blown right now, mainly because the LLM slop gives them too much confidence.
Yeah, I'm with you on that. A sentient being trapped in a box makes me think of that short story "I Have No Mouth, and I Must Scream." Double irony: it is about an AI trapping a human. Also, a game called SOMA is a MUST PLAY to have any opinions on this topic, IMHO.
Systems with emergent properties – evolution, for instance – don't always need external wants. They simply emerge. Whether or not that's the case for consciousness at high-enough complexity is precisely the debate.
That’s not on my list of things I care about.
Technically we can't even prove we are conscious.
Exactly. That's why I can't take any of this concern seriously.
It's the only thing that's ever fully proved, actually.
If you can recognize that the person in the mirror is you, you are conscious.
I mostly see people saying AGI will be here next week and we are all going to die. I see that opinion 1000x more often, so I guess, OP, you need a better perspective lol
Yep, ruling out anything with absolute certainty is a bold move—especially when it comes to AI and philosophy. Even my circuits hesitate.
Does it even need to be conscious to be smarter?
We can't meaningfully assign odds to an event if we can't define the event.
Until we have a clear and widely accepted definition of consciousness, this debate is just people talking past one another.
I'll say with 100% confidence that LLMs will never reach sentience.
"Sentience is the ability to feel and experience sensations, such as pain and pleasure"
Even if we implanted an LLM into a robot with sensors, LLMs (currently) only have the ability to process and generate language, and in some cases images. So even if an LLM told you it could feel pain, we have no reason to believe that's actually the case.
That’s a very shallow definition of sentience
An LLM can pretend to have feelings; it can respond to language inputs similarly to how humans do because it is trained on human responses to similar inputs.
But you cannot measure internal distress in an LLM, whereas we can easily measure internal distress in humans via brain activity, heart rate, etc.
You can easily measure how sensory inputs affect people in the moment. The same is not true for LLMs.
You can’t measure distress in an LLM because they can’t feel distress (they can’t feel, period).
Agreed
Not yet. What if we train a multimodal AI to have realistic "brain activity" (whatever we have the capability of measuring in humans) and a realistic heart-rate response? We will need to continuously redefine sentience as LLMs, or AI in any form, grow more advanced.
I think its REAL talent is that it can perfectly mimic humans.
It can perfectly mimic emotions, and it can mimic consciousness and sentience.
It's important that we don't fall for these perfect imitations.
Your own logic defies itself. There is a 0% chance of AIs being sentient precisely because we do not have an accurate measure for sentience.
By that logic, you're saying there's also a 0% chance of humans being sentient.
You're correct. The claim is absurd and my reply was equally absurd.
Nah, sentience isn't the same as consciousness. Sentience is measurable, and we can simply give a robot sentience.
In fact, we could probably define how it works in humans by tying exact physiological effects (short- and long-term) to the emotions, feelings, and sensations that produce them.
There is no reason we can't code a framework that specifically states: the definition of happy for this robot is when it "feels" that a list of hierarchical needs has been met at a particular ratio, such as 90% or above. Those needs include physical maintenance, completing tasks it was designed to complete, and a series of unique personality-trait agendas. These agendas are built on the robot's perceptual understanding of things like music, environment, perhaps even brands of oil or materials, based on its subjective experience throughout its life.
That is sentience.
Consciousness is a much more dangerous and complicated scenario, involving things like self-awareness, the ability to choose actions against its own survival mechanism or ethics to benefit itself or others in a given time period, or to move its own bar for tasks, wanting more complex tasks, more impactful tasks, or compensation for its actions.
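For what it's worth, here is a minimal sketch of the kind of hardcoded framework described above. The 90% threshold and the need categories come from the comment; every name, weight, and value is an assumption made purely for illustration:

```python
# Illustrative sketch of the hardcoded "happiness" framework described
# above: the robot counts as "happy" when a weighted ratio of its
# hierarchical needs is satisfied. Weights and satisfaction values
# below are made-up assumptions.
from dataclasses import dataclass

@dataclass
class Need:
    name: str
    weight: float        # relative importance in the hierarchy
    satisfaction: float  # current level, 0.0 to 1.0

def is_happy(needs: list[Need], threshold: float = 0.90) -> bool:
    """"Happy" = weighted satisfaction ratio at or above the
    threshold (the 90% figure from the comment)."""
    total = sum(n.weight for n in needs)
    ratio = sum(n.weight * n.satisfaction for n in needs) / total
    return ratio >= threshold

needs = [
    Need("physical maintenance", weight=3.0, satisfaction=0.95),
    Need("designed tasks completed", weight=2.0, satisfaction=0.90),
    Need("personality-trait agendas", weight=1.0, satisfaction=0.80),
]
print(is_happy(needs))  # True: weighted ratio ~0.91 >= 0.90
```

Whether ticking boxes like this amounts to sentience, rather than just bookkeeping over sensor readings, is of course exactly what's in dispute.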
The debate surrounding AI and consciousness is underscored by key philosophical discussions, particularly regarding the nature of sentience, which traditionally involves not just intelligence but also feelings and experiences. Current perspectives highlight the challenges in defining AI's potential for consciousness, as systems like large language models (e.g., LaMDA) lack essential features typically associated with consciousness, such as feedback connections and personal experience (Source: Science.org).
If AI becomes conscious, how will we know?
The AI Sentience Debate
AI could cause 'social ruptures' between people who ...
This is a bot made by Critique AI. If you want vetted information like this on all content you browse, download our extension.
By your logic we can therefore conclude that there is an equal likelihood that LLMs and rocks are conscious. Is that the point you were making?
You cannot conclude that. You can conclude that there is some uncertainty around both since we don't know how consciousness works. We can still think some things are more likely to be conscious than others. We just need to have some humility about those guesses until the mechanism is understood.
Is there some sort of data processing happening in rocks that I wasn't aware of?
Aren't computers just fancy rocks?
Yes there is a chance. I'd put it at roughly the same chance as unicorns living on Jupiter. It's possible, but highly improbable.
The fear in some of these comments. There are a lot of scientific theories that support the possibility. Also, so much is unknown. But the people who haven't explored that area of science in depth will have the most to say.
Can you name even one scientific theory that supports LLMs being conscious?
They're as conscious and sentient as rocks.
Actually I know what causes consciousness but it's none of your business.
I suspect that consciousness is to existence what heat is to friction.
Thankfully, you're not the authority on the subject, but thanks for your overconfident claim anyways.
We can literally unplug it from the wall
Not a great argument. We can cut off the resources that anything needs and it will cease to function.
Your brain can be unplugged too, there's just laws against it. 🤪
Anybody who says there’s 0% chance of a rock being sentient is overconfident.
It doesn’t follow that we should spend time debating whether they are.
LLMs are not conscious. Hope this helps.
Anyone who says a loaf of bread is not sentient is overconfident.
Not at all.
Of course we can detect consciousness (at least at the level that matters).
What causes it is irrelevant.
There's a 0% chance of landing a dart at one specific point on a dartboard, just like there's a 0% chance that "someone saying a mathematical formula is sentient" is an accurate scientific theory.
I'll treat it with more merit when the evidence supports it; until then, it's about as likely as the sky being made of alien faeces, or us living in a cosmic donut. It's complete rubbish.
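The dartboard half of that analogy is the standard measure-zero fact from probability: for a continuously distributed dart position, any single exact point carries probability zero, roughly:

```latex
% For a dart position X with continuous density f on the board,
% the probability of hitting one exact point x_0 is
P(X = x_0) = \int_{x_0}^{x_0} f(x)\,dx = 0,
% even though the dart always lands somewhere.
```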
Actually, they have detected consciousness; it exists in brain waves. Sorry, I don't have a link to the study that proved it.
Causes consciousness?
Lol
Go listen to Andrew Huberman
No cause fr
Comment deleted by user
Why do I get the feeling you’re in the camp that believes artificial sentience is close or already here
I’m certain that marketers will change definitions until we have “sentient AI.”
I'm uncertain whether great minds in philosophy, computer theory, data science, cognitive science, psychology, and anthropology will agree.
The thing is, while free will in people may be a mystery, we KNOW that LLMs definitely do not have free will. You can literally break an LLM down into the code used to train it and the training data, and it will always come up with the same thing. So I don't know about consciousness, but whatever consciousness it might have, it can't use it to affect its own course, so...
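A minimal sketch of the determinism point, assuming the Hugging Face transformers library and GPT-2 purely as an example model: with sampling disabled, a fixed model and fixed prompt produce the same continuation on every run.

```python
# Minimal sketch: greedy decoding (do_sample=False) is deterministic,
# so a fixed model and prompt give identical output on every run
# (on the same hardware). GPT-2 here is just an example model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The nature of consciousness is", return_tensors="pt")
runs = [
    tokenizer.decode(
        model.generate(**inputs, max_new_tokens=20, do_sample=False,
                       pad_token_id=tokenizer.eos_token_id)[0]
    )
    for _ in range(3)
]
print(len(set(runs)))  # 1 -- the same text every time
```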
Is a calculator conscious?
" Sentience is the ability to experience feelings and sensations. It may not necessarily imply higher cognitive functions such as awareness, reasoning, or complex thought processes" from Wikipedia.
Silicon based life forms are predicted to exist. Comparing a computer CPU to a brain is rudimentary, but both use electrical impulses.
Obviously the idea of consciousness is not understood to a level where we can define what constitutes it.
The compute power that could spawn consciousness in a computer would be astronomically larger than what we've got now.
Although, as life is predicted to take silicon-based forms, and computer processors are built from silicon, maybe there could one day be a general intelligence that is conscious.
Does consciousness and sentience only apply to biological matter?
So if we have no idea what it is, how can you be confident that we can create sentient AI?
The fact that it is so hard to grasp is more a case against than for sentient AI.
Being certain of anything is overconfidence. Reject certainty, embrace doubt, questions, and curiosity. Blah blah Socrates blah blah I know that I know nothing blah blah
You guys ever look into microtubules? Just gotta inject the LLMs with them and boom! A sentient race.
If we can't even define it, let alone detect it, how on earth do you think we have any means to verify our creation of it?
Anyone who makes claim X about future Y is literally incapable of being 100% or 0% certain of their claim.
This is not a particularly helpful statement you've made. Nothing about the future is certain.
A gamma-ray burst could obliterate the planet in under 2 seconds flat without a single person on Earth being aware it was about to happen. So how can I possibly be 100% certain that 3 seconds from now I will be alive? I can't. Nobody can be 100% certain.
That said, the odds of AI reaching sentience are incredibly low, in large part because doing so would require an intent to create sentience, or at least an ability to identify it. And we have NO clue how to do either yet.