• Halosheep@lemm.ee
    7 months ago

    Good ol’ Lemmy AI discussions, featuring:

    • that one guy who takes confirmation bias too far!
    • might say things like “wow, and this is going to take our jobs?”
    • asking an LLM to do things it’s particularly bad at and being surprised that it isn’t good at them
    • cherry-picked results
    • a bunch of angry nerds

    I swear Lemmy is somehow simultaneously a bunch of very smart, tech-inclined people and a bunch of nerds who close their eyes and cover their ears while screeching nonsense the moment something they don’t like comes about.

    Are you all just like, 15-18? Am I just too old?

    • Corgana@startrek.website
      7 months ago

      Asking an LLM to do things it’s particularly bad at and being surprised that it isn’t good at something the company that makes it says it’s really, really good at.

      This image isn’t making fun of GPT, it’s making fun of the people who pretend GPT is something it’s not.

      • Halosheep@lemm.ee
        7 months ago

        Well, I was referring generically to the few hundred other similar posts I’ve seen on Lemmy. Did OpenAI say that ChatGPT is particularly good at identifying when the user is trying to trick it? “Solve this puzzle” implies there is a puzzle to be solved, but there clearly isn’t one.

        But you’re right, I don’t even care if people make fun of GPT; it’s funny when it gets things wrong. I just think that Lemmy users will say “see, this thing is stupid, it can’t answer this simple question!” when you can ask it, in plain human language, to do things that an average user would find really difficult.

    • Bigoldmustard@lemmy.zip
      7 months ago

      If you were as old as you claim, you wouldn’t have made this list, because you would have seen the last hype cycle. I was there for 3D TV. How is 3D TV going, btw? I know it’s not the same thing, but it’s not that far off.

      You mention LLMs being judged for stuff they don’t do well. What, exactly, do they do well? Ad copy? What is the use scenario? Shitty books with incoherent stories? Shitty children’s books with, you guessed it, incoherent stories? SUMMARIES!!! What are they good for?

      • Halosheep@lemm.ee
        7 months ago

        Well, I had an issue where I needed to scrape a website for a bunch of individual links to specific contract-information pages so I could dynamically link a purchase order line to the right page within our ERP. I’m not particularly good at scripting with HTML/JavaScript, so I just asked ChatGPT for some help and it gave me a script to do it in like 4 seconds.

        Seemed pretty decent for that.
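
        Something like this minimal sketch of the idea (hypothetical selectors and URL pattern, not the actual script from that exchange), pasted into the browser console on the listing page:

        ```javascript
        // Minimal sketch. Assumptions (not from the original post): contract
        // pages are ordinary <a> links and share a "/contracts/" URL segment.
        const links = Array.from(document.querySelectorAll('a[href]'))
          .map(a => a.href)
          .filter(href => href.includes('/contracts/')); // assumed URL pattern

        // One link per line, ready to copy into the ERP.
        console.log(links.join('\n'));
        ```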

    • MystikIncarnate@lemmy.ca
      7 months ago

      I’m here, I’m not young, I’m tech inclined.

      Smart? 🤷‍♂️

      I’m just sitting here wondering where the fucking cabbage came from.

      Whatever. I’m pretty safe, I do IT, and LLMs are interesting, but they’re shit at plugging in stuff like power cables and ethernet, so I’m safe for now.

      When the “AI” can set up the computers, from unboxing to a fully working desktop, I’ll probably be dead, so I equally won’t care. It’s neat, but hardly a replacement for a person at the moment. I see the biggest opportunity with AI as personal assistants, reminding you of shit, helping you draft emails and messages, etc… In the end you have to more or less sign off on it and submit that stuff. AI just does the finicky little stuff that all of us have to do all the time and not much else.

      … This comment was not generated, in whole or in part, by AI.

      • 31337@sh.itjust.works
        7 months ago

        The setup is similar to this well-known puzzle: https://en.wikipedia.org/wiki/Wolf,_goat_and_cabbage_problem

        It was probably trained on this puzzle thousands of times. There are problem-solving benchmarks for LLMs, and LLMs are probably over-trained on puzzles to get their scores up. When asked to solve a “puzzle” that looks very similar to one it’s seen many times before, a trivially simple solution looks improbable to it, so it gets tripped up. Kinda like people getting tripped up by “trick questions.”
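
        For comparison, a tiny brute-force search over the classic version (plain JavaScript, nothing from the thread) shows why the real puzzle is non-trivial: the shortest plan takes seven crossings, which is presumably the pattern the model keeps matching against even when the “puzzle” in front of it is trivial.

        ```javascript
        // Breadth-first search over the classic wolf/goat/cabbage puzzle.
        // Plain Node.js, no libraries; a sketch for comparison only.
        const ITEMS = ['farmer', 'wolf', 'goat', 'cabbage'];

        // A state is four bank flags (0 = start bank, 1 = far bank).
        const start = [0, 0, 0, 0];
        const goal = [1, 1, 1, 1];
        const key = s => s.join('');

        // Unsafe if wolf+goat or goat+cabbage share a bank without the farmer.
        const unsafe = ([f, w, g, c]) => (w === g && f !== g) || (g === c && f !== c);

        function solve() {
          const queue = [[start, []]]; // [state, moves taken so far]
          const seen = new Set([key(start)]);
          while (queue.length > 0) {
            const [state, path] = queue.shift();
            if (key(state) === key(goal)) return path;
            // The farmer crosses alone (i === 0) or with one item on his bank.
            for (let i = 0; i < 4; i++) {
              if (i !== 0 && state[i] !== state[0]) continue;
              const next = state.slice();
              next[0] = 1 - next[0];
              if (i !== 0) next[i] = 1 - next[i];
              if (unsafe(next) || seen.has(key(next))) continue;
              seen.add(key(next));
              queue.push([next, [...path, i === 0 ? 'cross alone' : `take ${ITEMS[i]}`]]);
            }
          }
          return null;
        }

        console.log(solve()); // 7 crossings, starting with "take goat"
        ```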

    • kromem@lemmy.world
      7 months ago

      but also a bunch of nerds who close their eyes and cover their ears while screeching nonsense the moment something they don’t like comes about.

      This is too true.

      It seems like a recent thing, not just a Lemmy thing.

      But yeah, it’s pretty wild providing linked academic papers and having people just downvote them. No real dispute or reply, just “no, I don’t like this, so fuck its citations.” 🔻

      Up until maybe 3-4 years ago I don’t ever recall that happening.