
> I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved.
I’ve got to disagree with this. All past pop-culture AI was sentient and self-motivated; it was human-like in that it had its own goals and autonomy.

Current AI is a transcript generator. It can do smart stuff, but it has no goals; it just responds with text when you prompt it. It feels like magic, even compared to 4-5 years ago, but it doesn’t feel like what was classically understood as AI, certainly not by the public.

Somewhere along the way, marketers changed AGI to mean “does predefined tasks with human-level accuracy” or the like. This is more like the definition of a good function approximator (how appropriate) than what people think (or thought) about when considering intelligence.

The thing that blows my mind about language models isn't that they do what they do, it's that it's indistinguishable from what we do. We are a black box; nobody knows how we do what we do, or if we even do what we do because of a decision we made. But the funny thing is: if I can perfectly replicate a black box then you cannot say that what I'm doing isn't exactly what the black box is doing as well.

We can't measure goals, autonomy, or consciousness. We don't even have an objective measure of intelligence. Instead, since you probably look like me, I think it's polite to assume you're conscious…that's about it. There’s literally no other measure. I mean, if I wanted to be a jerk, I could ask if you're conscious, but whether you say yes or no is proof enough that you are. If I'm curious about intelligence I can come up with a few dozen questions, out of a possible infinite number, and if you get those right I'll call you intelligent too. But if you get them wrong… well, I'll just give you a different set of questions; maybe accounting is more your thing than physics.

So, do you just respond with text when you’re prompted with input from your eyes or ears? You’ll instinctively say “No, I’m conscious and make my own decisions”, but that’s just a sequence of tokens with a high probability in response to that question.

Do you actually have goals, or did the system prompt of life tell you that in your culture, at this point in time, you should strive to achieve goals because that’s what gets positive feedback?


Your argument makes no sense

It's a straightforward argument and he presented it fairly clearly, so...

Maybe this will help you: https://en.wikipedia.org/wiki/Philosophical_zombie

The hard nut to crack here is that nobody has an empirical test for the subjective experience of consciousness. A machine which actually possesses it and a machine which merely emulates it, answering questions as if it has that subjective experience, cannot be distinguished by any empirical test. That includes people; it's simply a matter of common courtesy and pragmatism that we assume other people have comparable subjective conscious experiences (aka they aren't p-zombies).


Well then keep working on it.

> All past pop-culture AI was sentient and self-motivated; it was human-like in that it had its own goals and autonomy.

I have to strongly disagree with you here. This was absolutely not the case in a great deal of science fiction media, particularly in the 20th century. AIs / robots were often depicted as automatons with no self-agency and no goal-setting of their own, who were usually capable of understanding and following complex orders issued in natural language (but which frequently misunderstood orders in ways humans find surprising, which became a source of conflict).

Almost all of Asimov's robots are like this. There are a handful of counterexamples, but for the most part his robots are p-zombies that mis-follow orders.

Non-sentient AI with no personal motivation also frequently comes up in situations where the machine is built to be an impartial judge; for instance, in The Demolished Man, all criminal prosecutions need to persuade a computer which does nothing but evaluate evidence and issue judgments.

Non-sentient AIs also show up often in ship-board computers. Examples are Mother in Alien and the Computer in at least most of Star Trek (I'm no Trekkie, so forgive me for missing counterexamples and nuance; technology in that show does whatever the writers need).

Even the droids in Star Wars: do they ever really exercise agency over their own lives? They have no apparent life goals or plans; they're just along for the ride, appliances with superficial personalities.

In The Hitchhiker's Guide to the Galaxy, does Deep Thought actually have self-agency? I only recall it thinking hard about the questions posed to it, and giving nonsensical answers which miss the obvious intent of the question, causing more trouble than any of it was worth.

Ghost in the Shell obviously has sentient AIs, but in that setting they are novel and surprising; most androids are presumed to be just machines with dumb programming, and it's only the unexpected emergence of more complicated systems that prompts the philosophizing.


I think we’re looking at the same thing in different ways. But regardless, I don’t think a valid interpretation of how AI was classically depicted is as a transcript generator or an extension thereof. There’s still some notion of taking action on its own (even if it’s according to a rigid set of principles and a literal interpretation of a request, like an Asimov robot) that is not present in LLMs and cannot be.

> Current AI is a transcript generator. It can do smart stuff but it has no goals

That's probably not because of an inherent lack of capability, but because the companies that run AI products don't want to run autonomous intelligent systems like that.
