Apparently, stealing other people’s work to build a product for money is now “fair use,” according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

    • vexikron@lemmy.zip · 11 months ago

      Well, off the top of my head:

      Whole Brain Emulation, attempting to model a human brain as physically accurately as possible inside a computer.

      Genetic iteration (more standard names are genetic algorithms or neuroevolution), where you set up a simulated environment for digital actors, simulate quasi-neurons and quasi-body parts dictated by quasi-DNA in a way that mimics actual biological natural selection, and then run the simulation millions of times until your digital creature develops a stable survival strategy (a minimal sketch of this loop follows at the end of this comment).

      Similar approaches have been used to do things like teach an AI humanoid to develop its own winning martial-arts style over many, many iterations, starting from not even being able to stand up, much less do anything to an opponent.

      Both of these approaches obviously have strengths and drawbacks, and might or might not be able to go far beyond what they have achieved to date given known problems, but neither of them relies on a training set of essentially the entire content of the internet.
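
      For readers unfamiliar with the second approach, here is a minimal, illustrative sketch of an evolutionary loop in Python. The genome, fitness function, and parameters are invented placeholders standing in for the quasi-DNA and simulated environment described above, not any particular real system.

      ```python
      import random

      # Toy "genome": a list of floats standing in for quasi-DNA. The fitness
      # function is a placeholder for "survival" in a simulated environment;
      # a real setup would run a physics or game simulation instead.
      GENOME_LENGTH = 8
      POPULATION_SIZE = 50
      GENERATIONS = 200
      MUTATION_RATE = 0.1

      def fitness(genome):
          # Placeholder objective: reward genomes whose values approach 1.0.
          return -sum((g - 1.0) ** 2 for g in genome)

      def random_genome():
          return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

      def mutate(genome):
          # Small random perturbations play the role of genetic mutation.
          return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
                  for g in genome]

      def crossover(a, b):
          # Single-point crossover mixes two parent genomes.
          point = random.randint(1, GENOME_LENGTH - 1)
          return a[:point] + b[point:]

      population = [random_genome() for _ in range(POPULATION_SIZE)]
      for generation in range(GENERATIONS):
          # Selection: keep the fittest half as parents for the next generation.
          population.sort(key=fitness, reverse=True)
          parents = population[: POPULATION_SIZE // 2]
          offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                       for _ in range(POPULATION_SIZE - len(parents))]
          population = parents + offspring

      print("best fitness:", fitness(max(population, key=fitness)))
      ```

      The point of the sketch is that the training signal comes entirely from the simulated environment and the selection pressure, not from a scraped corpus.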

      • HumbleHobo@beehaw.org · 11 months ago

        That sounds like a great idea for making an intelligent agent inside a video game, where you control all aspects of its environment. But what about an AI that you want to interact with our current shared reality? If I want to know something that involves synthesizing multiple modalities of knowledge, how should that information be conveyed? Do humans grow up inside test tubes, consuming only content that they themselves have created? Can you imagine the strange society we would have if people were unleashed upon the world without any shared experiences until they were fully adults?

        I think the OpenAI people have a point here, but I think where they go off the rails is that they expect all of this copyrighted information to be granted to them at zero cost and with zero responsibility to the creators of said content.