
We'll know AGI has arrived when AGI researchers manage to go five minutes without publishing hallucinated citations.

https://x.com/m2saxon/status/1979349387391439198



Came from the Google Docs to BibTeX conversion, apparently.

https://x.com/m2saxon/status/1979636202295980299


I’m gonna start referring to my own lies as “hallucinations”. I like the implication that I’m not lying, but rather speaking truthfully, sincerely, and confidently about things that never happened and/or don’t exist. It seems paradoxical, but this is what we’re effectively suggesting with “hallucinations”. LLMs necessarily lack things like imagination, an ego concerned with appearing informed and factually correct, or awareness of how a lack of truth and honesty may affect users and society. In my (not-terribly-informed) opinion, that precludes LLMs from even approximate intelligence. They’re either quasi-intelligent entities who routinely lie to us, or they are complex machines that identify patterns and reconstruct plausible-sounding blocks of text without any awareness of abstract concepts like “truth”.

Edit: toned down the preachiness.


This looks like a knee-jerk reaction to the title instead of anything substantial.


It does seem a bit ridiculous…


So infallibility is one of the necessary criteria for AGI? It does seem like a valid question to raise.

Edit due to rate-limiting, which in turn appears to be due to the inexplicable downvoting of my question: since you (JumpCrisscross) are imputing a human-like motivation to the model, it sounds like you're on the side of those who argue that AGI has already been achieved?


> infallibility

Lying != fallibility.



