Chairman Meow

  • 0 Posts
  • 68 Comments
Joined 1 year ago
Cake day: August 16th, 2023


  • Except the part where it said downloading videos is against their terms of service? Which was my only point?

    Did you completely fail to read the part “except where authorized”? That bit of legalese is a blanket “you can’t use this software in a way we don’t want to”.

    You physically cannot download files to a browser. A browser is a piece of software. It does not allow you to download anything.

    Ah, you just have zero clue what you’re talking about, but you think you do. I can point out exactly where you are on the Dunning-Kruger curve.

    This is such a wild conversation and ridiculous mental gymnastics. I think we’re done here.

    Hilarious coming from you, who has ignored every bit of information people have thrown at you to get you to understand. But agreed, this is not going anywhere.


  • Yes, by allowing you to download the video file to the browser. This snippet of legal terms didn’t really reinforce any of your points.

    But it actually is helpful for mine. In legalese, downloading and then storing a file actually falls under reproduction, as this essentially creates an unauthorized copy of the data if not expressly allowed. Storing is legally separate from downloading itself, which is just the act of moving data from one computer to another. Downloading also, somewhat pedantically, necessitates reproduction into the temporary memory of the computer (e.g. RAM), but this temporary reproduction is in most cases allowed (except when it comes to copyrighted material from an illegal source, for example).

    In legalese here, “downloading” specifically refers to retrieving server data in an unauthorized manner (e.g. a bot farm scraping videos, or trying to watch a video that isn’t supposed to be out yet). Storing that data to a file falls under the legal definition of reproduction instead; the sketch below illustrates the technical side of the distinction.
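
    To make the technical side of that distinction concrete, here’s a minimal Python sketch. It’s purely illustrative (the URL and filename are made-up placeholders, and none of this is legal advice): the first step only moves bytes into temporary memory, while the second persists a copy to disk.

    ```python
    # Illustrative sketch of "downloading" vs "reproduction" as discussed above.
    # The URL and filename are hypothetical placeholders.
    import urllib.request

    VIDEO_URL = "https://example.com/video.mp4"  # placeholder, not a real endpoint

    # "Downloading": the bytes travel from the server into temporary memory
    # (RAM). This transient copy is what streaming playback relies on.
    with urllib.request.urlopen(VIDEO_URL) as response:
        buffer = response.read()  # the data now exists only in RAM

    # "Reproduction": writing those bytes out creates a persistent copy of
    # the work on local storage, which is the legally distinct act.
    with open("local_copy.mp4", "wb") as f:
        f.write(buffer)
    ```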


  • What they didn’t prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It’s just that this particular method of inferential training, what they call “AI-by-Learning,” is an NP-hard computational problem.

    This is exactly what they’ve proven. They found that if you can solve AI-by-Learning in polynomial time, you can also solve random-vs-chance (or whatever it was called) in tractable time, and that’s a known NP-hard problem; a sketch of the reduction follows below. Ergo, the current learning techniques that are tractable will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you could reuse the exact same proof presented in the paper).

    They merely mentioned these methods to show that it doesn’t matter which method you pick. The explicit point is to show that it doesn’t matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI. It could be a good AI of course, but that G is pretty important here.

    But it’s easy to just define general intelligence as something approximating what humans already do.

    No, General Intelligence has a set definition that the paper’s authors stick with. It’s not as simple as “it’s a human-like intelligence” or something that merely approximates it.
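
    For anyone who wants the shape of that argument spelled out, here’s a minimal sketch of the reduction in LaTeX. The problem name H is my placeholder for the paper’s actual NP-hard problem (I’m citing from memory):

    ```latex
    % Sketch of the reduction argument. H stands in for the paper's actual
    % NP-hard problem; the names here are illustrative placeholders.
    \begin{align*}
    &\text{Assume an algorithm } A \text{ solves AI-by-Learning in polynomial time.} \\
    &\text{Given a poly-time reduction } f : H \le_p \text{AI-by-Learning}, \\
    &\text{the composition } A \circ f \text{ decides } H \text{ in polynomial time}
      \;\Rightarrow\; P = NP. \\
    &\text{Contrapositive: if } P \neq NP \text{, no tractable learner of this kind can exist.}
    \end{align*}
    ```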




  • Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.

    That’s assuming that we are a general intelligence. I’m actually unsure if that’s even true.

    That doesn’t mean they’ve proven there’s no pathway at all.

    True, they’ve only calculated it’d take perhaps millions of years. Which might be accurate; I’m not sure what kind of computing power global evolution over trillions of organisms across millions of years actually adds up to (a rough back-of-the-envelope attempt follows below). And yes, perhaps some breakthrough happens, but it’s still very unlikely and definitely not “right around the corner” as the AI-bros claim (and that near-future claim is what the paper set out to disprove).
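
    Just to give a sense of scale, here’s a crude Fermi estimate in Python. Every number below is a loose assumption of mine, not a figure from the paper:

    ```python
    # Crude Fermi estimate of evolution's "search budget" (all numbers are
    # rough personal assumptions, not figures from the paper).
    organisms_alive = 1e12      # organisms alive at any moment (assumed)
    years = 1e9                 # span of evolutionary history (assumed)
    generations_per_year = 10   # averaged over fast/slow-breeding life (assumed)

    evaluations = organisms_alive * years * generations_per_year
    print(f"~{evaluations:.0e} organism-generations")  # on the order of 1e22
    ```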




  • The actual paper is an interesting read. They present an actual computational proof: even if you have essentially infinite memory, a computer that’s a billion times faster than what we have now, and perfect training data that you can sample without bias, and even if you’re only aiming for an AGI that performs slightly better than chance, it’s still completely infeasible within the next few millennia. Ergo, it’s definitely not “right around the corner”. We’re lightyears off still.

    They prove this by proving that if you could train an AI in a tractable amount of time, you would have proven P=NP. And thus, training an AI is NP-hard. Given the minimum data that needs to be learned to be better than chance, this results in a ridiculously long training time well beyond the realm of what’s even remotely feasible. And that’s provided you don’t even have to deal with all the constraints that exist in the real world.

    We perhaps need some breakthrough in quantum computing to get closer. That is not to say that AI won’t improve; it’ll still get a bit better. But there is a computationally proven ceiling here, and breaking through it is exceptionally hard.

    It also raises (imo) the question of whether we can truly consider humans to have general intelligence. Perhaps we’re not as smart as we think we are either.
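
    To make the “billion times faster” assumption concrete, here’s a small Python sketch of why a constant hardware speedup barely dents an exponential-time problem. The exponents are illustrative, not the paper’s actual bounds:

    ```python
    # Why a constant hardware speedup barely dents exponential cost.
    # All numbers are illustrative assumptions, not the paper's bounds.
    import math

    ops_per_sec = 1e18 * 1e9  # assumed exascale machine, a billion times faster

    def years_needed(n: int) -> float:
        """Years to perform 2**n operations on the hypothetical machine."""
        return 2.0 ** n / ops_per_sec / (3600 * 24 * 365)

    for n in (100, 150, 200):
        print(f"2^{n} ops: ~{years_needed(n):.1e} years")

    # A billion-fold speedup only shifts the exponent by log2(1e9) ~= 30:
    print(math.log2(1e9))  # ~29.9
    ```

    Even absurdly generous hardware only buys you about 30 doublings; the wall is in the exponent, not the constant factor.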



  • I actually think this form factor makes more sense than a two-fold. With two-folds you get this weird not-quite-square aspect ratio, but this gives you something much closer to a very typical 16:9. Perfect for watching movies or streaming games.

    I do want a proper Android device though, which Huawei unfortunately can’t provide anymore. But I hope other manufacturers try their hand at this form factor. Still, massive props to Huawei on the design and engineering feat; it’s genuinely the first phone I’ve seen in years that made me reconsider what I want in a phone.

    And no, obviously a tri-fold isn’t necessary, but neither are smartphones in general. Took me years to decide to purchase one and I was fine without one (started with a year-old Samsung S7, for reference).