• jarfil@beehaw.org · 10 months ago

    Artificial NNs are simulations (not “abstractions”) of animal (including human) neural networks… so, by definition, humans are not more than a neural network.

    simple number 0 through 1

    Not how it works.

    Animal neurons respond with a thresholded (clamping) function: constant zero output up to some threshold, above which they start outputting neurotransmitters as a function of the input values. Artificial NNs have been able to simulate that for a while.
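    As a rough sketch of that kind of thresholded response (the function name and threshold value here are illustrative, not taken from any particular model):

    ```python
    import numpy as np

    def thresholded_activation(x, threshold=1.0):
        """Zero output below the threshold; above it, output grows with the
        input (here linearly), similar to a shifted ReLU."""
        return np.maximum(0.0, x - threshold)

    # Below the threshold -> 0.0; above it -> a proportional response.
    print(thresholded_activation(np.array([0.2, 0.9, 1.5, 3.0])))
    # [0.  0.  0.5 2. ]
    ```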

    Still, for a long time it used to be thought that copying the human connectome and simulating it would be required to start showing human-like behaviors.

    Then, some big surprises came from a few realizations:

    1. You don’t need to simulate the neurons themselves, just the relationship between inputs and outputs (each one can be seen as the level of some neurotransmitter in some synapse).
    2. A grid of values can represent the connections of more neurons than you might think (most neurons are not connected to most others; neurotransmitters don’t travel far, they get reabsorbed, and so on).
    3. You don’t need to think “too much” about the structure of the network; add a few extra trillion connections to a relatively simple stack, and the network can start passing the Turing test.
    4. The values don’t need to be 16-bit floats; NNs quantized to as little as 4 bits (0 through 15) can still show pretty much the same behavior (see the sketch after this list).
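    A minimal toy version of point 4 (real quantization schemes work per-block and are fancier, so treat this as an assumption-laden sketch): 4-bit quantization maps each float weight onto one of 16 levels and back.

    ```python
    import numpy as np

    def quantize_4bit(w):
        """Map float weights onto 16 integer levels (0..15) plus a scale/offset."""
        lo, hi = float(w.min()), float(w.max())
        scale = (hi - lo) / 15.0
        q = np.round((w - lo) / scale).astype(np.uint8)  # values 0..15
        return q, scale, lo

    def dequantize_4bit(q, scale, lo):
        return q.astype(np.float32) * scale + lo

    w = np.random.randn(8).astype(np.float32)
    q, scale, lo = quantize_4bit(w)
    print(np.abs(w - dequantize_4bit(q, scale, lo)).max())  # small round-off error
    ```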

    There are still a couple things to tackle:

    1. The lifetime of a neurotransmitter in a synapse.
    2. Neuroplasticity.

    The first one is kind of getting solved by attention heads and self-reflection, but I’d imagine adding extra layers that “surface” deeper states into shallower ones might be a closer approach.
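    For reference, an attention head boils down to each position mixing in information from other positions, weighted by relevance. A bare-bones sketch (single head, leaving out the learned projections and masking):

    ```python
    import numpy as np

    def attention(Q, K, V):
        """Scaled dot-product attention: each query scores all keys,
        then returns a relevance-weighted mix of the values."""
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V
    ```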

    The second one… right now we have LoRAs, which are more like psychedelics or psychoactive drugs, working in a “bulk” kind of way… with surprisingly good results, but still.
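    The core of LoRA is small enough to sketch: instead of retraining a full weight matrix W, you learn a low-rank update B·A and add it on top (the dimensions below are made up):

    ```python
    import numpy as np

    d, r = 1024, 8                      # model width, LoRA rank (r << d)
    W = np.random.randn(d, d)           # frozen pretrained weights
    A = np.random.randn(r, d) * 0.01    # trainable low-rank factor
    B = np.zeros((d, r))                # zero-init so the update starts as a no-op

    def forward(x):
        # Original path plus the low-rank correction; only A and B get trained.
        return x @ W.T + x @ (B @ A).T
    ```

    The “bulk” feel comes from the fact that one small B·A product shifts the behavior of a whole layer at once.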

    Where it will really start getting solved is with massive-scale neuromorphic hardware accelerators the size of a 1TB microSD card (a proof of concept is already here: https://www.science.org/doi/10.1126/science.ade3483 ), which could cut training times by 10 orders of magnitude. Shoving those into a billion smartphones, then into some humanoid robots, is when the NN age will really get started.

    Whether that’s going to take more or less than 5 years is hard to say, but surely everyone is trying as hard as possible to make it less.

    TL;DR: we haven’t seen nothing yet.

    • mozz@mbin.grits.dev (OP) · 10 months ago

      by definition, humans are not more than a neural network.

      Imma stop you right there

      What’s the neural net that implements storing and retrieving a specific memory within the neural net after being exposed to it once?

      Remember, you said not more than a neural net – anything you add to the neural net to make that happen shouldn’t be needed, because humans can do it, and they’re not more than a neural net.

    • noxfriend@beehaw.org · 10 months ago

      We don’t even know what consciousness or sentience is, or how the brain really works. Our hundreds of millions spent on trying to accurately simulate a rat’s brain have not brought us much closer (Blue Brain), and there may yet be quantum effects in the brain that we are barely even beginning to recognise (https://phys.org/news/2022-10-brains-quantum.html).

      I get that you are excited, but it really does not help anyone to exaggerate the efficacy of the AI field today. You should read some of Brooks’ enlightening writing, like Elephants Don’t Play Chess, or the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).

      • jarfil@beehaw.org · 10 months ago

        Where did I exaggerate anything?

        We don’t even know what consciousness or sentience is, or how the brain really works.

        We know more than you might realize. For instance, consciousness shows up as the ∆ (difference in activity) between separate brain areas; when they all go in sync, consciousness is lost. We see a similar behavior with NNs.

        It’s nice that you mentioned quantum effects, since NN models all require a certain degree of randomness (“temperature”) to return their best results.
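        (“Temperature” is just a scale applied to the output distribution before sampling; a quick generic sketch, not tied to any specific model:)

        ```python
        import numpy as np

        def sample_with_temperature(logits, temperature=0.8):
            """Lower temperature -> sharper, more repeatable picks;
            higher temperature -> more randomness in the output."""
            scaled = np.asarray(logits) / temperature
            probs = np.exp(scaled - scaled.max())
            probs /= probs.sum()
            return np.random.choice(len(probs), p=probs)
        ```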

        trying to accurately simulate a rat’s brain have not brought us much closer

        There lies the problem. Current NNs have overcome the limitations of 1:1 accurate simulation by solving only for the relevant parts, then increasing parameter counts to the point where they perform better than the original.

        It’s kind of a brute force approach, but the results speak for themselves.

        the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).

        I’m afraid the “state of the art” in 2020 was not the same as the “state of the art” in 2024. We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together, a radical change just like the jump from air flight to spaceflight.