The world's first supercomputer capable of simulating networks at the scale of the human brain has been announced by researchers at Western Sydney University.
That’s also assuming true artificial intelligence isn’t just an emergent property of electrical grids, networks and computers. Like that thing where it seems like computers are listening; maybe they are.
It could be… but electrical grids and networks are barely on the billions-of-nodes scale (at best), with behavior restricted as much as possible to a binary “works / doesn’t work” state, organized in topologies designed to stifle any abnormal behavior… while current LLMs are already on the trillion-parameter scale, each parameter contributing to a simulated neuron’s trigger behavior, organized in topologies that maximize the effects of that behavior.
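For anyone curious what that “neuron trigger behavior” actually means, here’s a toy Python sketch (the weights and inputs are made up for illustration, not taken from any real model). Unlike a grid node’s binary works/doesn’t-work state, an artificial neuron gives a graded response: it sums weighted inputs and only “triggers” past a threshold.

```python
def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a ReLU activation."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, s)  # ReLU: output stays at 0 unless the neuron "triggers"

# Hypothetical inputs and hand-picked weights, purely illustrative:
print(neuron([1.0, 0.5], [0.8, -0.2], 0.1))   # fires: ~0.8
print(neuron([1.0, 0.5], [-0.8, 0.2], 0.1))   # below threshold: 0.0
```

An LLM is (very roughly) trillions of these weights wired together, which is why the topology comparison with grids matters: the whole design exists to amplify those graded responses rather than suppress them.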
What could get interesting is a billion smartphones, each running a neural network of a few billion parameters, all hooked to a network with just some dumb monkeys standing in the way of full integration. People on the Internet already show emergent behaviors they wouldn’t show otherwise; it will get interesting when they’re manipulated by more and more complex AIs, trained in turn on their own output post-processed by people.
Best case scenario, we’re going towards a tighter integration between humans and machines.
BTW, the premise of the original pre-production script for The Matrix was that the machines used humans as neural processing nodes; that’s why Neo could gain access to and control the machines: all humans had the machines’ code inside them and just needed the exploits/bugs to access it. They dumbed it down to “humans are batteries” in the final version, because 25 years ago they thought audiences wouldn’t get it (and might’ve been right). But now we can see that who’s whose auxiliary neural processor might change over the next couple of decades.