• morrowind@lemmy.ml · 7 days ago

    DeepSeek is an absolutely massive model; it's not the one people will be running locally. Rather, look at Qwen/QwQ, Gemma, and a number of other smaller ones.

    • ParetoOptimalDev@lemmy.today · 7 days ago

      No, people who want something approaching ChatGPT but local want to run at least deepseek V3 32B.

      Qwen at least fares much worse for my usage, as do the deepseek V3 variants under 32B.

        • vintageballs@feddit.org · 2 days ago
          They probably confused the R1 Qwen distill with something else. Afaik there is no 32B model from DeepSeek directly.

      • Korhaka@sopuli.xyz · 7 days ago
        I run deepseek-r1:14b locally; it has to spill over into RAM, so it runs slower, but it's still a reasonably good speed and keeps up with my reading. I should try a larger one at some point, but the downloads get quite big at those sizes. I usually run ~7B models, since those fit entirely in VRAM and run way faster.
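
        For anyone wanting to script against a model run this way, here's a minimal Python sketch. It assumes the model is served through Ollama (the deepseek-r1:14b tag matches Ollama's naming scheme) on its default local port 11434; the generate() helper name, the prompt, and the timeout are just illustrative choices.

        # Minimal sketch, assuming deepseek-r1:14b is served by a local
        # Ollama instance on its default port (an assumption, not something
        # stated above).
        import requests

        def generate(prompt: str, model: str = "deepseek-r1:14b") -> str:
            """Send a prompt to the local Ollama server and return the reply."""
            resp = requests.post(
                "http://localhost:11434/api/generate",
                # stream=False asks for one complete JSON object instead of
                # a stream of partial responses.
                json={"model": model, "prompt": prompt, "stream": False},
                timeout=600,  # a model spilling into RAM can be slow
            )
            resp.raise_for_status()
            return resp.json()["response"]

        if __name__ == "__main__":
            print(generate("In one sentence, what is a distilled model?"))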