• doodledup@lemmy.world · 5 months ago

    I think when people talk about LLMs replacing Alexa, they mean the much more capable models with billions of parameters. The small models a Raspberry Pi can run aren’t really of any use.

    • hedgehog@ttrpg.network · 5 months ago

      The models I’m talking about that a Pi 5 can run do have billions of parameters, though. For example, Mistral 7B (here’s a guide to running it on the Pi 5) has roughly 7 billion parameters. Quantized to 4 bits per parameter, it only takes up about 3.5 GB of RAM, so it fits comfortably in the 8 GB model’s memory. If you have a GPU with 8+ GB of VRAM (most cards from the past few years do - the 1070, the 2060 Super, the 3050, and every better card in those generations hit that mark), you have enough VRAM and more than enough speed to run Q4 versions of the 13B models (roughly 13 billion parameters), and with 24 GB of VRAM, like a 3090, you can run Q4 versions of the 30B models.
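
      Back-of-the-envelope math for why 4-bit quantization makes these fit (a rough sketch in Python; real runtimes add overhead for the KV cache, activations, and a few tensors kept at higher precision, so actual usage runs a bit higher):

          # Approximate weight storage for a quantized model:
          # parameters * bits_per_weight / 8 bytes.
          def approx_size_gb(params_billion: float, bits_per_weight: int = 4) -> float:
              return params_billion * 1e9 * bits_per_weight / 8 / 1e9

          for b in (7, 13, 30):
              print(f"{b}B @ Q4 ~= {approx_size_gb(b):.1f} GB")
          # 7B -> 3.5 GB, 13B -> 6.5 GB, 30B -> 15.0 GB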

      Apple Silicon Macs can also competently run inference for these models - for them the limiting factor is system RAM rather than VRAM, since the memory is unified. And it’s not like you’ll need a Mac, as even Microsoft is investing in ARM CPUs with dedicated AI chips.
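
      If anyone wants to try this from Python rather than following the guide’s CLI steps, here’s a rough sketch using llama-cpp-python - the GGUF filename and settings are just examples, so swap in whichever Q4 quant you actually download:

          # pip install llama-cpp-python
          from llama_cpp import Llama

          llm = Llama(
              model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # example path to a ~4 GB Q4 quant
              n_ctx=2048,    # context window; bigger costs more RAM
              n_threads=4,   # match the Pi 5's four cores
          )

          out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
          print(out["choices"][0]["text"])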

      • doodledup@lemmy.world · 5 months ago

        Thanks for sharing that. I have a Raspberry Pi 4B lying around collecting dust. I might give this a try.