• Possibly linux@lemmy.zip · 6 months ago

    This isn’t entirely true. AI is usually trained on public data such as Wikipedia.

    AI is a tool. How you use it is what matters.

    • 31337@sh.itjust.works · 6 months ago (edited)

      It’s also trained on data people reasonably expected would stay private (private GitHub repos, Adobe Creative Cloud, etc.). Even if it were just public data, it could still be dangerous. For example, it might be possible to give an LLM a prompt like “give me a list of climate activists, their addresses, and their employers” if it was trained on that data or was good at “browsing” on its own. That’s currently not possible due to the guardrails on most models, and I’m guessing they try to avoid training on personal data that’s public, but a government agency could build an LLM without those guardrails. The data might be public, but it would take a person quite a bit of work to track down, compared to the ease and efficiency of just asking an LLM.

    • StaySquared@lemmy.world · 6 months ago

      Like cracking passwords / encryption and injecting itself into anything and everything that connects to the internet?

        • StaySquared@lemmy.world · 6 months ago

          You can train AI to crack passwords/encryption lol. You do realize AI is being utilized for exactly that right at this moment, right? Simply put, the very first step is to eliminate its boundaries/guardrails, then proceed from there.

              • Elias Griffin@lemmy.world · 6 months ago

                Very interesting tip, preciate that.

                @PassGAN

                Instead of relying on manual password analysis, PassGAN uses a Generative Adversarial Network (GAN) to autonomously learn the distribution of real passwords from actual password leaks, and to generate high-quality password guesses. Our experiments show that this approach is very promising.
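                PassGAN itself trains a GAN, which needs a deep-learning framework and a real leak corpus. As a much simpler stand-in for the same core idea, learning the distribution of leaked passwords and sampling guesses from it, here is a hypothetical character-level Markov sketch (the tiny `leak` list and all names are made up for illustration):

```python
# Hypothetical sketch: learn a character-bigram model from "leaked" passwords,
# then sample guesses from it. PassGAN does this with a GAN; the Markov model
# here is a deliberately simplified stand-in for the same idea.
import random
from collections import defaultdict

START, END = "^", "$"

def train(leak):
    """Count character bigram transitions over a list of leaked passwords."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in leak:
        chars = [START] + list(pw) + [END]
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def sample(counts, rng, max_len=16):
    """Generate one password guess by walking the learned transitions."""
    out, cur = [], START
    while len(out) < max_len:
        nxt = rng.choices(list(counts[cur]),
                          weights=list(counts[cur].values()))[0]
        if nxt == END:
            break
        out.append(nxt)
        cur = nxt
    return "".join(out)

# Toy "leak"; a real run would use millions of passwords from actual breaches.
leak = ["password", "password1", "letmein", "dragon", "123456", "passw0rd"]
model = train(leak)
rng = random.Random(0)
guesses = {sample(model, rng) for _ in range(200)}
```

                The generated guesses look password-like (they reuse the patterns of the training set), which is exactly why this beats blind enumeration against human-chosen passwords.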

                • StaySquared@lemmy.world · 6 months ago (edited)

                  It requires deep learning.

                  Deep learning could be used to attempt to break encryption, but its effectiveness depends on factors such as the strength of the encryption algorithm and the key length. Deep learning, a subset of machine learning, involves training artificial neural networks to learn and make decisions.

                  AI techniques such as machine learning and deep learning could potentially automate cryptanalysis and make it more effective, thereby compromising the security of cryptographic systems.

                  • OpFARv30@lemmy.ml · 6 months ago

                    This is nonsense. Passwords might have an interesting distribution, but the key space is flat. There is nothing to learn.

                    And I hope you didn’t mean letting an LLM loose on, say, the AES circuit, and expecting it will figure something out.
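                    The distinction can be shown with a toy simulation (all numbers here are hypothetical): guessing in learned-frequency order pays off against a skewed password distribution, but against a uniformly random key every guessing order is equivalent, so a model has nothing to exploit.

```python
# Hypothetical demo: frequency-ordered guessing vs. a flat key space.
import random

rng = random.Random(42)

# Skewed "password" population: a few choices cover most users.
passwords = ["123456"] * 50 + ["password"] * 30 + ["qwerty"] * 15 + ["x9!kQ2"] * 5
# Flat toy key space: 2**16 equally likely values.
key_space = 2 ** 16

def guesses_needed_skewed():
    """Guess in learned-frequency order against a sampled password."""
    target = rng.choice(passwords)
    order = ["123456", "password", "qwerty", "x9!kQ2"]  # most common first
    return order.index(target) + 1

def guesses_needed_uniform():
    """Against a uniform key, any fixed guessing order performs the same."""
    target = rng.randrange(key_space)
    return target + 1  # expected value is about key_space / 2

trials = 10_000
avg_pw = sum(guesses_needed_skewed() for _ in range(trials)) / trials
avg_key = sum(guesses_needed_uniform() for _ in range(trials)) / trials
# avg_pw stays near 1.75 guesses; avg_key stays near key_space / 2.
```

                    The learned ordering collapses the password case to a couple of guesses on average, while the uniform key still costs on the order of half the key space, no matter how clever the model is.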

          • Possibly linux@lemmy.zip · 6 months ago

            No, you can’t, at least not in the way you think. You crack passwords by trying combinations, and AI and machine learning are bad at raw attempts.
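            A minimal sketch of what “trying combinations” means in practice (the leaked hash and the tiny alphabet are hypothetical): hash each candidate and compare against the stolen digest. The bottleneck is raw hash throughput, which is what GPUs and ASICs, not ML models, are built for.

```python
# Hypothetical brute-force sketch: enumerate candidates, hash, compare.
import hashlib
from itertools import product

stolen = hashlib.sha256(b"cab").hexdigest()  # stand-in for a leaked hash

def brute_force(target_hex, alphabet="abc", max_len=4):
    """Try every string over `alphabet` up to `max_len`, shortest first."""
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None

recovered = brute_force(stolen)
```

            Real systems add salts and deliberately slow KDFs (bcrypt, scrypt, Argon2) precisely to make this enumeration expensive; a learned model can only reorder the candidates, not skip the hashing.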

    • Kilgore Trout@feddit.it · 6 months ago (edited)

      Wikipedia requires attribution, which AI scrapers never give.

      It is “public” work, but under a license.