I’m gonna start referring to my own lies as “hallucinations”. I like the implication that I’m not lying, but rather speaking truthfully, sincerely, and confidently about things that never happened and/or don’t exist. It seems paradoxical, but that’s effectively what we’re suggesting when we call them “hallucinations”.
LLMs necessarily lack things like imagination, an ego that’s concerned with appearing informed and factually correct, or any awareness of how a lack of truth and honesty might affect users and society. In my (not-terribly-informed) opinion, that precludes LLMs from even approximate levels of intelligence. They’re either quasi-intelligent entities who routinely lie to us, or they’re complex machines that identify patterns and reconstruct plausible-sounding blocks of text without any awareness of abstract concepts like “truth”.
So infallibility is one of the necessary criteria for AGI? That does seem like a valid question to raise.
Edit due to rate-limiting, which in turn appears to be due to the inexplicable downvoting of my question: since you (JumpCrisscross) are imputing a human-like motivation to the model, it sounds like you're on the side of those who argue that AGI has already been achieved?
https://x.com/m2saxon/status/1979349387391439198