Oh we absolutely do. And we tell lies, and we misunderstand, and miscommunicate.
But not all the time, and not everyone. So if you ask your friend if they’d like dinner, you expect the answer to be true to what they want, not just whatever sounds good to the general population. If you read a scientific journal, you expect the scientists to represent the facts and even the meaning of their research, not to parrot some ideas from a half-forgotten textbook. And if you see a professional counsellor, you expect them to have a good understanding of human nature, to genuinely empathise with your situation, and to have good ways to help you out.
And of course all three of those examples fail sometimes, which is why as part of life we learn who we can trust and to what extent.
I would argue that all of the cases you presented fail at a rate comparable to foundational LLMs
And I would argue that’s utter nonsense, and that the very existence of sane, rational speech disproves it.
I would argue that you’ve clearly formed your opinion without spending significant time giving foundational LLMs a chance
Nah, more that I forget how dumb people can be sometimes: I was reminded recently that there are plenty of examples of people spouting LLM-like answers. But I still contend that most people, trusted in their proper areas, talk with meaning and comprehension.
As to LLMs, perhaps I haven’t given them enough of a chance. But I have experimented with them for a while myself, read others’ reports, and delved into how their mathematical models work. So I’m not exactly clueless.
That’s impressive for someone who seems clueless
I would encourage you to give foundational large models a chance
I think you’ll find that (barring intentionally subversive inputs) the largest and most powerful models basically don’t hallucinate
O1 in particular is better than humans in my experience