• 0 Posts
  • 34 Comments
Joined 2 years ago
Cake day: July 9th, 2023

  • I usually run batches of 16 at 512x768 at most; doing more than that causes bottlenecks, though I feel like I was able to do that on the 3070 Ti too. I’ll look into those other tools when I’m home, thanks for the resources. (HF Diffusers? I’m still using A1111)

    (ETA: I have written a bunch of unreleased plugins to make A1111 work better for me, like VSCode-like editing for special symbols like (/[, plus a bunch of other optimizations. I haven’t released them because they’re not “perfect” yet and I have other projects to work on, but there are reasons I haven’t left A1111)
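To illustrate the kind of “VSCode-like” paired-symbol editing meant here, a minimal sketch — the function name and behavior are my own assumptions, not the actual plugin — of wrapping a selected span of prompt text in matching attention brackets the way an editor auto-closes pairs:

```python
# Hypothetical sketch: wrap a text selection in a paired symbol,
# the way VSCode surrounds a selection when you type ( or [.
PAIRS = {"(": ")", "[": "]", "{": "}"}

def wrap_selection(text: str, start: int, end: int, opener: str) -> str:
    """Surround text[start:end] with opener and its matching closer."""
    closer = PAIRS[opener]
    return text[:start] + opener + text[start:end] + closer + text[end:]

print(wrap_selection("a cozy cabin", 2, 6, "("))  # a (cozy) cabin
```

In A1111 prompts, ( and [ adjust attention weight, so a selection-aware wrapper like this is a natural editing convenience.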


  • I just run SD1.5 models, and my process involves a lot of upscaling since things come out around 512 base size. I don’t really fuck with SDXL because generating at 1024 halves and halves again (i.e., quarters) the number of images I can generate in any pass (and I have a lot of 1.5-based LoRA models). I do really like SDXL’s general capabilities, but I rarely dip into that world (I feel like I locked in my process like 1.5 years ago and it works for me — don’t know what you kids are doing with your fancy pony diffusions 😃)

  • rebelsimile@sh.itjust.works to Selfhosted@lemmy.world · Can’t relate at all. · 5 months ago

    Apple price gouges for memory, yes, but a theoretical 64GB 4090 would have cost as much in this market as the whole computer did. If you’re using it to its full capabilities, then I think it’s one of the best values on the market. I just run the 20B models because they meet my needs (and in Open WebUI I can combine a couple at that size), as I use the Mac for personal use too.

    I’ll look into the AMD Strix though.


  • I know it’s a downvote earner on Lemmy, but my 64GB M1 Max with its unified memory runs these large-scale LLMs like a champ. My 4080 (which is ACHING for more VRAM) wishes it could. But when it comes to image generation, the 4080 smokes the Mac. The issue with image generation and VRAM size is that you can think of the VRAM like an aperture: less VRAM closes off how much you can do in a single pass.
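The aperture analogy can be put into rough numbers. A minimal sketch, assuming — purely for illustration — that a card’s VRAM supports an approximately fixed number of pixels per pass; the budget figure is taken from the 16-image 512x768 batches mentioned earlier in the thread:

```python
def max_batch(budget_px: int, width: int, height: int) -> int:
    # Rough "aperture" model: with a fixed per-pass pixel budget,
    # batch size is the budget divided by pixels per image.
    return budget_px // (width * height)

# Illustrative budget: 16 images at 512x768 per pass.
budget = 16 * 512 * 768

print(max_batch(budget, 512, 768))    # 16
print(max_batch(budget, 512, 512))    # 24
print(max_batch(budget, 1024, 1024))  # 6 -- quartered vs. 512x512
```

Real memory use also depends on model size, attention implementation, and optimizations like tiling, but the quadratic cost of resolution is why 1024 generation “halves and halves again” the images per pass.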

  • yeah, I think the way I always read that question was in the “hundred duck-sized horses vs. one horse-sized duck” sense. The average woman in a city passes by, say, hundreds of men per day in public, right? I read that question (and the implication) as: from a safety standpoint, they’d prefer each one of them were a bear instead — which is more of a video game premise than a situation anyone would survive.


  • Fuckin’ A on this one. Think about how much companies make with entirely artificial scarcity. You can only add 1 license to this, you can only watch on one TV unless you pay us $15 a month. It costs $200 to change where you are in the database table for this flight. Complete bullshit and we need to see it for what it is and stop it. I love how she says all that will be left in their wake is dull capitalists. Exactly. Stop playing their games. Play your game.