• Overzeetop@beehaw.org
    10 months ago

    I sat in a room of probably 400 engineers last spring, and they all laughed and jeered when the presenter asked if AI could replace them. With the right framework and dataset, ML almost certainly could replace about 2/3 of the people there; I know the work they do (I’m one of them), and the bulk of my time is spent recreating documentation, using 2-3 computer programs to facilitate calculations, and looking up and applying manufacturers’ data to the situation. Mine is an industry of high repeatability, and the human judgement part is, at most, 10% of the job.

    Here’s the real problem. The people who will be fully automatable are those with less than 10 years of experience. They’re the ones doing the day-to-day layout and design, and their work is monitored, guided, and checked by an experienced senior engineer who catches their mistakes. Replacing all of those people with AI will save a ton of money, right up until all of the senior engineers retire. In a system which maximizes corporate/partner profit, that will come at the expense of training the future senior engineers until, at some point, there won’t be any (/enough), and yet a substantial fraction of oversight will still be needed. Unfortunately, ML is based on human learning, and replacing the “learning” stage of human practitioners with machines is eventually going to create a gap in qualified human oversight. That may not matter too much for marketing art departments, but for structural engineers it’s going to result in a safety or reliability issue for society as a whole. And since failures in my profession only occur in marginal situations (high loads - wind, snow, rain, mass gatherings), my suspicion is that it will be decades before we really find out that we’ve been whistling past the graveyard.

    • jarfil@beehaw.org

      that will come at the expense of training the future senior engineers until, at some point, there won’t be any (/enough)

      Anything a human can be trained to do, a neural network can be trained to do.

      Yes, there will be a lack of trained humans for those positions… but spinning up enough “senior engineers” will be as easy as moving a slider on a cloud computing interface… or remote API… done by whichever NN comes to replace the people from HR.

      ML is based on human learning and replacing the “learning” stage of human practitioner with machines is going to eventually create a gap in qualified human oversight

      Cue in the humanoid robots.

      Better yet: outsource the creation of “qualified oversight”, and just download/subscribe to some when needed.

      • mozz@mbin.grits.devOP

        Anything a human can be trained to do, a neural network can be trained to do.

        Citation needed

        • jarfil@beehaw.org

          Humans are neural networks… you can cite me on that.

          (Notice I didn’t say anything about the complexity, structure, or fundamental functioning of a human neural network. Everything points to modern artificial NNs being somewhat on a tangent to humans… but also to there being some overlap already, and to that overlap being something that can be increased.)

          • mozz@mbin.grits.devOP

            Humans are a lot more than the mathematical abstraction that is a neural network.

            You could say that you believe any computational task a human brain can accomplish, a neural network can also accomplish (simply assuming that all of the higher-level structures - the different parts of the brain allocated to particular tasks, the way it encodes and interacts with memories and absorbs new skills, the variety of chemical signals which communicate more than a simple number 0 through 1 being sent through each neuron-to-neuron connection - are abstractable within the mathematical construct of a neural network in some doable way). But that’s (a) not at all obvious to me, (b) not at all the same as simply asserting that we’ve got it all tackled now that we can do some great stuff with neural networks, and (c) not implying anything at all about how soon it’ll happen (i.e. it could take 5 years, or 500, although my feeling is probably on the shorter side as well).

            • jarfil@beehaw.org

              Artificial NNs are simulations (not “abstractions”) of animal, and human, neural networks… so, by definition, humans are not more than a neural network.

              simple number 0 through 1

              Not how it works.

              Animal neurons respond as a clamping function, with a constant 0 output up to some threshold, where they start outputting neurotransmitters as a function of the input values. Artificial NNs have been able to simulate that for a while.
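
              That threshold-then-ramp behavior is roughly what a ReLU-style activation imitates; a minimal sketch (NumPy, function name is mine):

```python
import numpy as np

def relu(x):
    # Below the threshold (here 0) the "neuron" stays silent;
    # above it, the output grows with the input.
    return np.maximum(0.0, x)
```

              Real activation functions vary (sigmoids, GELU, etc.), but the clamp-at-zero shape is the common simplification.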

              Still, for a long time it was thought that copying the human connectome and simulating it would be required to start showing human-like behaviors.

              Then, some big surprises came from a few realizations:

              1. You don’t need to simulate the neurons, just the relationship between inputs and outputs (each one can be seen as the level of some neurotransmitter at some synapse).
              2. A grid of values can represent the connections of more neurons than you might think (most neurons are not connected to most others; neurotransmitters don’t travel far, they get reabsorbed, and so on).
              3. You don’t need to think “too much” about the structure of the network; add a few extra trillion connections to a relatively simple stack, and the network can start passing the Turing test.
              4. The values don’t need to be 16-bit floats; NNs quantized to as little as 4 bits (0 through 15) can still show pretty much the same behavior.
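
              Point 4 can be illustrated with a toy linear (min/max) quantizer mapping floats onto 16 levels (0 through 15); a rough sketch, not any specific production scheme:

```python
import numpy as np

def quantize_4bit(w):
    # Map float weights onto 16 integer levels (0..15).
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 15.0 or 1.0  # avoid div-by-zero on constant weights
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_4bit(q, scale, lo):
    # Recover approximate floats; error is at most half a quantization step.
    return q * scale + lo
```

              The round trip loses at most half a step of precision per weight, which is why behavior degrades surprisingly little.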

              There are still a couple things to tackle:

              1. The lifetime of a neurotransmitter in a synapse.
              2. Neuroplasticity.

              The first one is kind of getting solved by attention heads and self-reflection, but I’d imagine adding extra layers that “surface” deeper states into shallower ones, might be a closer approach.

              The second one… right now we have LoRAs, which are more like psychedelics or psychoactive drugs, working in a “bulk” kind of way… with surprisingly good results, but still.
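
              For context, the LoRA idea is a trainable low-rank update added on top of frozen weights; a toy sketch (shapes and names are illustrative, not any particular library’s API):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # hypothetical layer width and low rank

W = rng.normal(size=(d, d))      # frozen pretrained weight matrix
A = rng.normal(size=(r, d))      # small trainable factor
B = np.zeros((d, r))             # starts at zero, so the model is initially unchanged

def forward(x, alpha=1.0):
    # The adapter adds a low-rank update (B @ A) on top of frozen W;
    # only A and B (2*d*r values instead of d*d) would be trained.
    return x @ (W + alpha * (B @ A)).T
```

              Only the small A and B matrices change, which is what gives the “bulk”, swap-in/swap-out character mentioned above.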

              Where it really will start getting solved, is with massive scale neuromorphic hardware accelerators the size of a 1TB microSD card (proof of concept is already here: https://www.science.org/doi/10.1126/science.ade3483 ), which could cut down training times by 10 orders of magnitude. Shoving those into a billion smartphones, then into some humanoid robots, is when the NN age will really get started.

              Whether that’s going to take more or less than 5 years, it’s hard to say, but surely everyone is trying as hard as possible to make it less.

              TL;DR: we haven’t seen nothing yet.

              • mozz@mbin.grits.devOP

                by definition, humans are not more than a neural network.

                Imma stop you right there

                What’s the neural net that implements storing and retrieving a specific memory within the neural net after being exposed to it once?

                Remember, you said not more than a neural net – anything you add to the neural net to make that happen shouldn’t be needed, because humans can do it, and they’re not more than a neural net.

              • noxfriend@beehaw.org

                We don’t even know what consciousness or sentience is, or how the brain really works. Our hundreds of millions spent on trying to accurately simulate a rat’s brain have not brought us much closer (Blue Brain), and there may yet be quantum effects in the brain that we are barely even beginning to recognise (https://phys.org/news/2022-10-brains-quantum.html).

                I get that you are excited, but it really does not help anyone to exaggerate the efficacy of the AI field today. You should read some of Brooks’ enlightening writing, like “Elephants Don’t Play Chess”, or the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).

                • jarfil@beehaw.org

                  Where did I exaggerate anything?

                  We don’t even know what consciousness or sentience is, or how the brain really works.

                  We know more than you might realize. For instance, consciousness is the ∆ of separate brain areas; when they go all in sync, consciousness is lost. We see a similar behavior with NNs.

                  It’s nice that you mentioned quantum effects, since the NN models all require a certain degree of randomness (“temperature”) to return the best results.

                  trying to accurately simulate a rat’s brain have not brought us much closer

                  There lies the problem. Current NNs have overcome the limitations of 1:1 accurate simulations by solving only for the relevant parts, then increasing the parameter counts to a point where they solve better than the original thing.

                  It’s kind of a brute force approach, but the results speak for themselves.

                  the airoplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).

                  I’m afraid the “state of the art” in 2020, was not the same as the “state of the art” in 2024. We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together, a radical change just like that from air flight to spaceflight.

      • noxfriend@beehaw.org

        Anything a human can be trained to do, a neural network can be trained to do.

        Come on. This is a gross exaggeration. Neural nets are incredibly limited. Try getting them to even open a door. If we someday come up with a true general AI that really can do what you say, it will be as similar to today’s neural nets as a space shuttle is to a paper aeroplane.

        • jarfil@beehaw.org

          Try getting them to even open a door

          For now there is “AI vs. Stairs”; you may need to wait for a future video for “AI vs. Doors” 🤷

          BTW, that is a rudimentary neural network.

          • noxfriend@beehaw.org

            I’ve seen a million of such demos but simulations like these are nothing like the real world. Moravec’s paradox will make neural nets look like toddlers for a long time to come yet.

      • Overzeetop@beehaw.org

        I’m assuming you’re being facetious. If not…well, you’re on the cutting edge of MBA learning.

        There are still some things that just don’t get into books, or drawings, or written content. It’s one of the drawbacks humans have - we keep some things in our heads that just never make it to paper. I say this as someone who has encountered conditions in the field that have no literature on the effect. In the niches and corners of any practical field there are just a few people who do certain types of work, and some of them never write down their experiences. It’s frustrating as a human doing the work, but it would not necessarily be so for an ML assistant unless it gains a new ability to understand and identify where solutions don’t exist, and to go perform expansive research to extend the knowledge. More importantly, it needs the operators holding the purse to approve that expenditure, trusting that the ML output is correct and not asking it to extrapolate in lieu of testing. Will AI/ML be there in 20 years to pick up the slack, put its digital foot down stubbornly, and point out that lives are at risk? Even as a proponent of ML/AI, I’m not convinced that kind of output is likely - or even desired by the owners and users of the technology.

        I think AI/ML can reduce errors and save lives. I also think it is limited in the scope of risk assessment where there are no documented conditions on which to extrapolate failure mechanisms. Heck, humans are bad at that, too - but maybe more cautious/less confident and aware of such caution/confidence. At least for the foreseeable future.

    • mozz@mbin.grits.devOP

      Yeah. This is something that to me isn’t getting enough attention in the whole conversation. I’m trying to get myself up to speed on how to code effectively with AI tools, but I feel like understanding the code at a deep level is required in order to be able to do that effectively.

      In the future, I think the “learning” that gives you that type of knowledge won’t be something people are forced to go through anymore, because AI can do the simple stuff for them. The inevitable result is that very few people will be able to do more than rely on the AI tools to either get it right or not, because they won’t understand the underlying systems. I’m honestly not sure what future is in store a couple of generations from now, other than most people being forced to trust the AI (whatever its capabilities or incapabilities are at that point). That doesn’t sound like a good scenario.

      • Overzeetop@beehaw.org

        The future is already here. This will sound like some old man yelling at clouds, but the tools available for advanced structural design (automatic environmental loading, finite element modeling) are used by young engineers as magical black boxes which spit out answers. That’s little different from 30 years ago, when the generation before me would complain that calculators, unlike slide rules, were so disconnected from the problem that you could put in two numbers, hit the wrong operation, and get a nonsensical answer - but believe it to be correct because the calculator told you so.

        This evolution is no different; it’s just that the process of design (whether programming or structures or medical evaluation) will be further along before someone realizes that everything that’s being offered is utter shit. I’m actually excited about the prospect of AI/ML, but it still needs to be handled like a tool. Modern machinery can do amazing things faster, and with higher precision, than hand tools - but when things go sideways it can also destroy things much quicker and with far greater damage.

        • jarfil@beehaw.org

          old man yelling at clouds

          My turn.

          Almost 30 years ago, in sunny Spain, a friend of mine was studying to become an Electrical Engineer. Among the things that would be under his responsibility, he told me, was approving the plans for industrial buildings. “So your curriculum includes some architecture?”, I asked. “No need”, he responded, “you just put the numbers into a program and it spits out all that’s needed”.

          Fast forward to 2006, when an industrial hall in Poland, built by a Spanish company, and turned into a disco, succumbed under the weight of snow on its roof, killing 65 people.

          Wonder if someone forgot to check the “it snows in winter” option… 🙄