• jarfil@beehaw.org · 10 months ago

Where did I exaggerate anything?

> We don’t even know what consciousness or sentience is, or how the brain really works.

We know more than you might realize. For instance, consciousness correlates with the differentiation (the ∆) between activity in separate brain areas: when those areas all fall into sync, as in deep sleep or a generalized seizure, consciousness is lost. We see similar behavior in NNs.
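
As a toy illustration of that idea (entirely my own sketch; mean pairwise correlation is a crude stand-in for the actual neuroscience measures, e.g. those from integrated information theory):

```python
import numpy as np

rng = np.random.default_rng(0)

def differentiation(regions: np.ndarray) -> float:
    """regions: (n_regions, n_timesteps) activity traces.
    Returns 1 - mean pairwise correlation; ~0 when all regions are in sync."""
    corr = np.corrcoef(regions)
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return 1.0 - off_diag.mean()

t = np.linspace(0, 10, 500)
# independent phases + noise: regions carry distinct signals ("conscious"-like)
desync = np.stack([np.sin(t + rng.uniform(0, 2 * np.pi)) + 0.3 * rng.normal(size=t.size)
                   for _ in range(8)])
# everything in lockstep: the regions collapse into one signal
sync = np.tile(np.sin(t), (8, 1)) + 0.01 * rng.normal(size=(8, t.size))

print(differentiation(desync))  # high differentiation
print(differentiation(sync))    # near 0: fully synchronized
```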

It’s nice that you mention quantum effects, since NN models all require a certain degree of randomness (the sampling “temperature”) to return their best results.
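
As a minimal sketch of how that temperature knob works (the logits and four-token vocabulary are invented for the example): dividing the logits by the temperature before the softmax controls how random the pick is.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(logits: np.ndarray, temperature: float) -> int:
    """Scale logits by 1/temperature, softmax, then sample one token index."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

logits = np.array([2.0, 1.0, 0.5, 0.1])  # made-up scores for a 4-token vocab
for temp in (0.1, 1.0, 2.0):
    picks = [sample(logits, temp) for _ in range(1000)]
    print(temp, np.bincount(picks, minlength=4) / 1000)
# low temperature -> almost always token 0 (near-greedy);
# high temperature -> flatter distribution, more "creative" randomness
```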

> trying to accurately simulate a rat’s brain have not brought us much closer

Therein lies the problem. Current NNs have overcome the limits of 1:1 accurate simulation by solving only for the relevant behavior, then increasing the parameter count to the point where they solve the task better than the original thing does.

It’s kind of a brute-force approach, but the results speak for themselves.
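
To illustrate the point, a toy sketch (entirely my own, with a made-up target function): a tiny NumPy MLP learns the input/output behavior of sin(x) without ever simulating the mechanism that produces it, and the fit improves as the parameter count grows.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256)[:, None]
y = np.sin(x)  # the "original thing" we never simulate, only approximate

def fit_mlp(hidden: int, steps: int = 5000, lr: float = 0.01) -> float:
    """Train a one-hidden-layer tanh MLP by plain gradient descent on MSE."""
    W1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1 / np.sqrt(hidden), (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)
        err = (h @ W2 + b2) - y
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        gh = err @ W2.T * (1 - h ** 2)        # backprop through tanh
        gW1 = x.T @ gh / len(x); gb1 = gh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return float((err ** 2).mean())

for hidden in (2, 8, 32):  # more parameters -> (typically) better approximation
    print(hidden, fit_mlp(hidden))
```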

> the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).

I’m afraid the “state of the art” in 2020 was not the same as the “state of the art” in 2024. We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together; a radical change, much like the jump from air flight to spaceflight.
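
As a hypothetical sketch of that “glue” role (every model and function name here is an invented stub, not a real API): an LLM classifies each request and routes it to the appropriate siloed specialist.

```python
def vision_model(payload: str) -> str:
    return f"[vision specialist handles: {payload}]"

def speech_model(payload: str) -> str:
    return f"[speech specialist handles: {payload}]"

def llm_route(task: str) -> str:
    # Stand-in for asking a real LLM "which specialist handles this task?";
    # a trivial keyword heuristic here so the sketch runs on its own.
    return "vision" if "image" in task else "speech"

SPECIALISTS = {"vision": vision_model, "speech": speech_model}

def orchestrate(task: str, payload: str) -> str:
    return SPECIALISTS[llm_route(task)](payload)

print(orchestrate("describe this image", "photo.jpg"))
print(orchestrate("transcribe this recording", "clip.wav"))
```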