Our AI-generated future is going to be fantastic.

Archive link, so you don’t have to visit Substack: https://archive.is/hJIWk

  • StenSaksTapir@feddit.dk · 10 months ago

    Here’s a basically fully automated service where you can generate a shitty book for $200. You can even have it printed as a paperback for more useless waste, or have it AI-narrated as a shitty audiobook.

    https://www.bookbud.ai/

    I hate everything about it.

      • jarfil@beehaw.org · 10 months ago

        How much time and power per page?

        Also, it seems like Meta trained LLaMA on copyright-protected books without permission, so the model might stop being free at any moment.

          • jarfil@beehaw.org · 10 months ago

            Heh, we’ll see.

            For starters, keep that copy safe, in case Meta has to pull down all the “illegally opensourced” copies of LLaMA. If the “authors” (publisher corporations) have their way, it will become illegal to run the model without paying them royalties, so Meta will only be able to offer it as a paid service. You might still be able to pirate it though, if people are willing to share.

            For the future, when your next computer comes with neuromorphic RAM capable of running those models a billion times faster… just hope it doesn’t also come with DRM checks built in to stop illegal models from being loaded (“RAM access error: unauthorized content detected”… doesn’t that sound like every author’s dream? 🙄)

        • Norah - She/They@lemmy.blahaj.zone · 10 months ago

          If you already have the computer for other reasons, such as gaming, are you really paying anything extra to use it for an LLM? The limiting factor isn’t raw power either; it’s VRAM size. A GTX 1080 with 8 GB can run some models, but an RTX 3060 12 GB can be bought new for really cheap and is more than enough for most people’s use at home. Raw GPU power only affects how long generation takes, but even if it took 12-24 hours, well, do you want it fast or do you want it cheap?

          There’s more details in this reddit thread, sorry for linking the hell site: https://old.reddit.com/r/LocalLLaMA/comments/12kclx2/what_are_the_most_important_factors_in_building_a/
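          As a rough sanity check (my own back-of-the-envelope Python sketch, not from the linked thread; the numbers and helper name are illustrative assumptions): the weights dominate memory use, so you can estimate whether a model fits by multiplying parameter count by bits per weight and leaving a little room for the KV cache and activations.

            # Rough estimate: model weights dominate VRAM use, so
            # GB of weights ~= params (billions) * bits_per_weight / 8,
            # plus some overhead for the KV cache and activations.
            def fits_in_vram(params_billions: float, bits_per_weight: int,
                             vram_gb: float, overhead_gb: float = 2.0) -> bool:
                weight_gb = params_billions * bits_per_weight / 8
                return weight_gb + overhead_gb <= vram_gb

            # A 7B model at 4-bit (~3.5 GB of weights) fits on an 8 GB GTX 1080
            # or a 12 GB RTX 3060; at 16-bit (~14 GB of weights) it does not.
            print(fits_in_vram(7, 4, 8))    # True
            print(fits_in_vram(7, 16, 12))  # False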

          • watersnipje@lemmy.blahaj.zone · 10 months ago

            Yeah, if you already have it then it’s not really an extra cost. But the smaller models perform less well and less reliably.

            To write a book that’s convincing enough to fool at least some buyers, I wouldn’t expect a Llama 2 7B to do the trick, based on what I see in my work (ML engineer). Even at work, I run Llama 2 70B quantized at most, not the full-size one. Full size and unquantized, it needs around 320 GB of GPU VRAM, and that’s just quite expensive (even more so when you have to rent it from cloud providers).
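            To put rough numbers on that (a weights-only estimate I’m adding here; the KV cache and activations push the real requirement higher):

              # Back-of-the-envelope weight sizes for a 70B-parameter model
              # at different precisions (weights only; runtime overhead adds more).
              PARAMS_B = 70  # billions of parameters

              for label, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("4-bit", 4)]:
                  gb = PARAMS_B * bits / 8
                  print(f"{label:>5}: ~{gb:.0f} GB of weights")

              # fp32: ~280 GB, fp16: ~140 GB, int8: ~70 GB, 4-bit: ~35 GB,
              # which is why quantized 70B is the practical option outside of
              # multi-GPU or cloud setups.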

            Although if you already have a GPU that size at home, then of course you can run any LLM you like :)