• WoahWoah@lemmy.world · 19 hours ago

    It’s just basic economics. The amount of power and influence you can generate with a disinformation troll farm dramatically outweighs the cost. It’s a high impact, low cost form of geopolitical influence. And it works incredibly well.

    • rottingleaf@lemmy.world · 24 minutes ago

      It’s like saying that bullets and knives being more convenient than bricks for killing people is basic economics.

      That doesn’t explain why those brownshirt types have guns and knives and kill people on the streets while you don’t, why the police don’t shoot them, and, more than that, why they’d arrest you if you did something to the brownshirts.

      The point I wanted to make with this bad analogy is that the systems are designed for troll farms to work, not the other way around. Social media are an instrument for imposing governments’ will on the population. There are things almost all governments converge on, so for them the upsides of such systems outweigh the downsides.

      Our world is dangerous.

    • Imgonnatrythis@sh.itjust.works · 18 hours ago

      You make it sound like everyone should be doing it. We could also save a lot of the money spent on courts and prisons if we just executed every suspect the state deemed guilty.

  • Breve@pawb.social · 21 hours ago

    My new rule of social media: unless I know and trust the person or organization making a post, I assume it’s worthless until I double-check it against a person or organization I do trust. Opinions are included in this rule too.

  • 2pt_perversion@lemmy.world · 19 hours ago

    I’d love to debate politics with you but first tell me how many r’s are in the word strawberry. (AI models are starting to get that answer correct now though)

    • sbv@sh.itjust.works · 14 hours ago

      I tried this with Gemini. Regardless of the number of r’s in the word (zero to three), it said two.

      • Kraven_the_Hunter@lemmy.dbzer0.com · 2 minutes ago

        So ask it about a made-up or misspelled word (“how many r’s in the word strauburrry”), or ask it something with no answer, like “what word did I just type?” Anything other than “you haven’t typed anything yet” is wrong.

      • tee9000@lemmy.world · edited · 14 hours ago

        LLMs look for patterns in their training data. So if you asked “2+2=”, it would look at its training data and find a high likelihood that the text following “2+2=” is “4”. It’s not calculating; it’s finding the most likely completion of the pattern based on the data it has.

        So it’s not deconstructing the word strawberry into letters and running a count; it tries to finish the pattern, and it fails at simple logic tasks that aren’t baked into the training data.

        But a new model, ChatGPT o1, checks its work against itself in ways I don’t fully understand, and it now scores around 85% on an international mathematics standardized test, so they’re making great improvements there (compared to around 14% from the model that can’t count the r’s in strawberry).
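        The “most likely completion” idea can be sketched in a few lines of Python. This is a toy frequency lookup, not a real language model, and the corpus is made up purely for illustration:

```python
from collections import Counter

# Toy "training corpus" (made-up data, for illustration only).
corpus = ["2+2=4", "2+2=4", "2+2=4", "2+2=5", "3+3=6"]

def complete(prompt):
    # Collect every continuation of the prompt seen in the corpus,
    # then return the most frequent one -- no arithmetic involved.
    continuations = Counter(
        line[len(prompt):] for line in corpus if line.startswith(prompt)
    )
    return continuations.most_common(1)[0][0]

print(complete("2+2="))  # "4" -- not computed, just the dominant pattern
```

        The answer comes out right only because “4” dominates the corpus; nothing in the code ever adds two and two.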

      • 2pt_perversion@lemmy.world · edited · 16 hours ago

        An oversimplification, but it partly has to do with how LLMs split language into tokens, some of which are multi-letter. When we look for R’s, we split the word like S - T - R - A - W - B - E - R - R - Y, where each character is a unit, but LLMs split it more like STR - AW - BERRY, which makes predicting the correct answer difficult without a lot of training on the specific problem. If you asked it to count how many times STR shows up in “strawberrystrawberrystrawberry”, it would have a better chance.
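        The token-level view can be sketched in a few lines of Python. The STR - AW - BERRY split is an illustrative assumption, not any real tokenizer’s actual output:

```python
word = "strawberry"

# Character-level counting -- what a human does:
print(word.count("r"))  # 3

# What the model effectively "sees": multi-letter tokens.
# This split is assumed for illustration; real tokenizers differ.
tokens = ["str", "aw", "berry"]

# The letter "r" is buried inside tokens, so there is no direct
# token-level signal for "how many r's" -- the model would have to
# have learned the letter makeup of each token from training data.
print(["r" in t for t in tokens])  # [True, False, True]

# Counting at the token level is the easy case: "str" shows up
# once per repetition of the word.
text = "strawberry" * 3
print(text.count("str"))  # 3
```

        Counting “str” succeeds because it lines up with a whole token; counting individual r’s requires looking inside tokens, which is exactly what the model never does.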