• 22 Posts
  • 86 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • Lex Luthor is, as a concept, a corrupt, self-serving villain who readily sacrifices his underlings to get ahead, yet gains popularity and power anyway.

    The difference is that in the cartoon universe, they had to make up a believable reason why US citizens would vote for Lex, e.g., he invented a Kryptonite power plant or something.

    In the real world, it turns out you can just demonize immigrants.


  • dragontamer@lemmy.world to Microblog Memes@lemmy.world · Just imagine (+190/−1, edited 12 days ago)

    Lex Luthor literally becomes president in many versions of the Superman cartoons.

    The idea of a rich billionaire with a narcissistic Messiah complex and a bone to pick with heroes and actually helpful people, who becomes popular and eventually US President, is practically a trope. Apparently this generation has forgotten the message.


  • dragontamer@lemmy.world to Fediverse@lemmy.world · Why is Mastodon struggling to survive? (+7/−4, edited 2 months ago)

    My post above is 376 characters, which would have required three tweets under the original 140-character limit.

    Mastodon, for better or worse, has captured a bunch of people who are hooked on the original super-short posting style, which I feel is a form of Newspeak / 1984-style dumbing down of language and discussion that removes nuance. Yes, Mastodon has removed the limit and we have a better ability to discuss things today, but that doesn’t change the years of training (erm… untraining?) we need to do to de-program people from this toxic style.

    Especially when Mastodon is trying to cater to people who are used to tweets.

    Your post could fit on Mastodon

    EDIT: and second, Mastodon doesn’t have the toxic-FOMO effect that hooks people into Twitter (or Threads, or Bluesky).

    People post not because short sentences are good. They post and doom-scroll because they don’t want to feel left out of something. Mastodon is healthier for you, but also less intoxicating / less pushy. It’s somewhat doomed to failure, as the very point of these short-post / short-engagement platforms is basically crowd manipulation, FOMO, and algorithmic manipulation.

    Without that kind of manipulation, we won’t get the same kinds of engagement on Mastodon (or Lemmy, for that matter).


  • dragontamer@lemmy.world to Fediverse@lemmy.world · Why is Mastodon struggling to survive? (+120/−6, edited 2 months ago)

    Because Threads and BlueSky form effective competition with Twitter.

    Also, short-form content with just a few sentences per post sucks. It’s become obvious that Twitter was mostly algorithm hype and FOMO.

    Mastodon tries to be healthier, but I’m not convinced that microblogs in general are that useful, especially to a techie audience that knows RSS and other publishing formats.



  • The ATmega328p is old and obsolete, but its assembly language remains in use on modern versions of the chip, such as the AVR DD or AVR EB.

    Maybe ATmega328p is simply the more popular name, and people aren’t using the “modern” chip names or the term “AVR Assembly”, but the code should mostly work on modern chips as well.

    Hmmmmmmmm…

    A collection of tutorials for using assembly language on the command line to program AVR microcontrollers such as the ATmega328p microcontroller used in the Arduino.

    The actual link shows that the author of the tutorials knows the difference. He says it’s an AVR Assembly tutorial (for the ATmega328p, for example), which is the correct way to present the information.


  • You know that an I2C bus is… a bus… right?

    That means you can attach a lot of things to the same bus. There are capacitance limits, but you’re probably fine attaching 8 or so devices to the same I2C bus. In theory, the addressing scheme supports up to 127 devices, but by then parasitic capacitance would have wrecked your signal. The slower you run the bus and the higher the power (i.e., the lower the resistance of the pull-up resistors), the more stuff you can attach to the I2C bus.
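
    For illustration, here’s a minimal sketch of scanning one bus for attached devices, assuming a Linux host that exposes the bus as /dev/i2c-1 (the device path, and probing with a 1-byte read like i2cdetect -r does, are assumptions; adjust for your setup):

    // Sketch: probe every valid 7-bit address on one I2C bus.
    // Assumes a Linux host exposing the bus as /dev/i2c-1.
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    int main(void) {
        int fd = open("/dev/i2c-1", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        for (int addr = 0x08; addr <= 0x77; addr++) {  // skip reserved addresses
            if (ioctl(fd, I2C_SLAVE, addr) < 0)
                continue;
            unsigned char byte;
            // A 1-byte read only succeeds if a device ACKs this address
            // (same idea as `i2cdetect -r`).
            if (read(fd, &byte, 1) == 1)
                printf("device ACKed at 0x%02x\n", addr);
        }
        close(fd);
        return 0;
    }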


    There’s clearly a UART and an SPI on the schematic here, so there are plenty of interfaces you can use, as well as GPIO. There are also whatever additional peripherals the STM32L412 offers.


    The real problem is that I2C with 10k pull-up resistors at 1.8V will pull… 180,000 nanoamps of current whenever a line is held low. So I2C absolutely will obliterate your low-power specs. Low-power design is a pain in the ass.
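
    Spelling out that arithmetic (just Ohm’s law, with the numbers from above):

    // Static current through one I2C pull-up while the line is driven low: I = V / R.
    #include <stdio.h>

    int main(void) {
        double v = 1.8;    // bus voltage, volts
        double r = 10e3;   // pull-up resistance, ohms
        double i = v / r;  // amps while SDA or SCL is held low
        printf("%.0f nA per pulled-low line\n", i * 1e9);  // prints 180000 nA (180 uA)
        return 0;
    }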


  • This is actually a big deal, but not for the reasons people expect.

    Many, many, many chips can do this. It’s not the chip that’s the hard part, but the board, which includes the voltage regulator and RTC pieces, all designed for minimal power. Case in point: a tantalum capacitor leaks roughly 1 µA (1000 nA), so a single tantalum capacitor will screw you over.

    Creating a design AND testing/verifying it at under 1 µA sleep is difficult. It’s well beyond just choosing a chip and programming its sleep state. The whole board needs to be thought of holistically.
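
    As a sketch of why, here’s the kind of budget arithmetic involved. Every line item below is an illustrative number, not from a real design:

    // Illustrative sub-1 uA sleep-current budget. All numbers are made up
    // to show the shape of the problem, not taken from a real board.
    #include <stdio.h>

    int main(void) {
        double budget_na    = 1000.0;  // target: 1 uA total sleep current
        double mcu_na       = 300.0;   // MCU deep sleep with RTC running
        double regulator_na = 500.0;   // regulator quiescent current
        double tantalum_na  = 1000.0;  // ONE tantalum cap's leakage (~1 uA)

        double total = mcu_na + regulator_na + tantalum_na;
        printf("total: %.0f nA vs budget %.0f nA -> %s\n",
               total, budget_na, total <= budget_na ? "OK" : "busted");
        return 0;
    }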


  • Since we’re slinging unsolicited advice, here’s a bit more: if someone shares their accomplishment, regardless of how fundamentally flawed it is, it costs you nothing and is far more helpful to say “Hey, that’s awesome! I like how you did $FEATURE. Great job!” and stop right there than to be condescending and nitpicky.

    Sure. You first. Please tell me what $FEATURE of this cluster you like. The best I’ve got is “It looks like a Cray from the ’80s”, but I’ve usually cared more about software and hardware features than looks.


    I don’t necessarily think that clusters need to be built out of the latest-and-greatest parts. I really do think that a Rasp. Pi cluster with MPI is more than enough for many students and hobbyists. I also think there are other parts you can use to do that (e.g., maybe a TI Sitara or something), and you’d actually get something respectable from a software perspective.

    And BTW: the Zynq FPGA is low-end and relatively basic. It’s, again, the software that matters (or in this case, the VHDL or Verilog design you program into the FPGA). Everyone in this hobby can afford a Zynq, with some dev boards in the $150 range. That’s why I pushed it in my post earlier. If that’s still too much for you, there are cheaper FPGAs, but the Zynq is a good one to start with since it’s an ARM core + FPGA combo, which is very useful in practice.


  • FRAM seems like the wrong technology for a looper. In particular, FRAM wears out on both reads and writes.

    I’d expect SRAM to be best, as most “loopers” for musicians are only active while the device is on (i.e., non-persistent), with Flash second best (you lose write durability, but read durability is effectively infinite, unlike FRAM).

    EDIT: https://www.fujitsu.com/uk/Images/MB85RS4MT.pdf

    So that’s 10^13, or 10 trillion access cycles (which include reads for FRAM). So if it’s read 10 times per second, that’s 1 trillion seconds, or I guess 30,000+ years of use. Hmmm… maybe that’s enough durability then. Lol. I guess that’s fine.
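
    The back-of-the-envelope math, assuming a constant 10 accesses per second:

    // FRAM endurance estimate from the MB85RS4MT datasheet figure.
    #include <stdio.h>

    int main(void) {
        double endurance = 1e13;  // read/write cycles per address (datasheet)
        double rate_hz   = 10.0;  // assumed accesses per second
        double seconds   = endurance / rate_hz;            // 1e12 seconds
        double years     = seconds / (3600.0 * 24 * 365);  // ~31,700 years
        printf("%.3g s, about %.0f years of continuous use\n", seconds, years);
        return 0;
    }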


  • I don’t like these in most cases. Before yall yell at me, lemme explain.

    1. Node-to-Node communication is a massively important problem. The easiest way to solve node-to-node communication is to have all the devices on the same silicon die, i.e., buy a 64-core EPYC. (Note: internally, AMD actually solved die-to-die communication through their Infinity Fabric, and that’s the key to their high-speed core-to-core communication despite having so many cores on one package.)

    2. Node-to-Node communication is a massively important problem. Once you maximize the size of a single package, like a 64-core EPYC, the next step is chip-to-chip communication, such as dual-socket systems (two CPUs on one motherboard). In practice, this is an extension of AMD’s Infinity Fabric. Note that Intel has Ultra Path Interconnect, which works differently but has similar specs (8-way CPU-to-CPU communication, NUMA awareness, etc.).

    3. Node-to-Node communication is a massively important problem. Once you’ve maximized the speed possible on a singular motherboard, your next step is to have a high-speed motherboard-to-motherboard connection. NVidia’s NVLink is perhaps the best example of this, with GPU-to-GPU communications measured on the order of TBs/second.

    4. Node-to-Node communication is a massively important problem. Once you’ve maximized NVidia’s NVLink, you use NVidia’s NVSwitch to expand communication to more GPUs.

    5. Node-to-Node communication is a massively important problem. Once you’ve maximized a cluster with dual-socket EPYCs and NVLink + NVSwitch GPUs, you then need to build out longer-range communication networks. 10Gbit Ethernet can be used, but 400Gbit Infiniband is popular amongst nation-state supercomputers for a reason. I think I’ve read papers suggesting that 100Gbit or 40Gbit fiber optics is a good in-between and yields acceptable results (not as fast as Infiniband, but still much faster than standard RJ-45-based consumer Ethernet). 10Gbit Ethernet was used on some projects IIRC, so if you’re trying to save money on the interconnect, it’s still doable.


    So when I see someone build a clustered computer out of 1Mbit (aka 0.000001 Tbit/sec) I2C communications, it’s hard for me to get excited, lol. The entire problem space is, in practice, defined by the gross difficulty of computer-to-computer communication… and I2C is just not designed for this space. Surprisingly, modern supercomputers are “just servers”, so anyone who has experience with the $20,000-class Xeons or EPYCs has experience with real supercomputer hardware today. (Which yall can rent for just a few dozen dollars per day from Amazon Web Services, btw, if you really wanted to. Cloud computing has made high-performance computing accessible in practice to even the cheapest hobbyist.)

    Now… when am I excited about “cheap clusters”? Well, the #1 problem with the approach I listed in 1-5 above is that such a beastly computer costs $10 million or more. Even entire nation-states can struggle to find the budget for this, let alone smaller corporations or hobbyists. But the “skills” needed to program a $10-million supercomputer are still required, so we need to think about how to train the next generation of programmers to use these expensive supercomputers. (Unlike an AWS rented instance, a Rasp. Pi cluster has to be taken care of by its administrator, building real administration skills.)

    There was a project that handled this latter case: Ethernet + Rasp. Pis, programmed with MPI (Message Passing Interface). By using Rasp. Pis + standard Ethernet switches as the basis of the cluster, it only costs thousands of dollars, not millions, to build a large cluster of hundreds of Rasp. Pis. And MPI is one of the real APIs used on the big-boy, nation-state-level supercomputers as well. Rasp. Pis are not NUMA-aware, nor do they have a good GPU-programming interface, however, so it’s not a perfect emulation of the issues. But it’s good enough to teach students. A Rasp. Pi supercomputer will never be “useful” in the real world outside of training, but student training is a good enough reason to be excited.
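
    To give a flavor of it, here’s a minimal MPI “ring” in C, the kind of thing you’d mpirun across a Pi cluster (run with at least 2 ranks; hostfiles and paths depend on your setup):

    // Minimal MPI example: pass a token around a ring of ranks.
    // Build: mpicc ring.c -o ring    Run: mpirun -np 4 ./ring
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {  // a 1-rank ring would send to itself and deadlock
            fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        int token;
        if (rank == 0) {
            token = 42;  // rank 0 starts the ring
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("token made it around %d ranks\n", size);
        } else {
            MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }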

    I look at this Rasp. Pi Pico cluster and… it’s not clear to me how this teaches people about the “big boy” supercomputers, or how it’d be more useful than standard multi-core programming. I2C is not the language of high-performance computing. And the Rasp. Pi Pico cannot run Linux or other OSes that’d teach practical administration skills either.


    For the embedded hobbyist, I’d suggest grabbing a Xilinx Zynq FPGA+ARM chip and experimenting with the high-performance compute available to you from custom FPGAs. That’s how satellite and military systems get large amounts of computational power into small power envelopes, which is likely why you’d be “interested” in a Rasp. Pi Pico in the first place. (Power constraints from small satellites or military weight restrictions prevent them from using real-world supercomputers on radars or whatever.) You can reach absurdly powerful levels of compute with incredibly low power usage with this pattern. (And I presume anyone in the “Microcontrollers” discussion is here because we have an interest in power-constrained computing.)

    If power is not a constraint for you… then you can study up on GPU programming / Xeon servers / EPYC servers / etc. for the big stuff. I am the moderator at https://lemmy.world/c/gpu_programming btw, so we can talk more over there if you’re interested in the lower-level GPU-programming details that’d build up to a real supercomputer. The absurd amount of compute available for just $500 or so today cannot be overstated. An NVidia 4080 or an AMD 7900 XTX has more compute power than entire Cray supercomputing clusters of the early 2000s. Learning how to unlock this power is what GPU programming (CUDA, DirectX, HIP/ROCm, OpenCL) is all about. I’m no expert in how to hook these clusters together with MPI / Infiniband / etc., but I can at least help you on this subject.