• 0 Posts
  • 7 Comments
Joined 8 months ago
Cake day: March 1st, 2024

    1. The rape narrative has been massively overblown, with little evidence backing it up besides racial bias against brown men. Did some women get sexually assaulted? Probably. Was it widespread and systemic like the media is pushing? Probably not. This isn’t to excuse the disgusting behavior of Hamas on October 7th, just saying it’s more likely this guy sexually assaulted Palestinians in an Israeli jail than that his girlfriend was assaulted.

    2. Just because a member of a group did something bad to you doesn’t give you the right to abuse that group back. If that guy’s girlfriend was sexually assaulted by a black person, that doesn’t give him the right to yell slurs and push around black people on the street. This guy wasn’t yelling about Hamas, he was yelling at the idea of Palestine; that’s just straight-up racism.



  • Not_mikey@slrpnk.net to Selfhosted@lemmy.world · 2real4me
    1 month ago

    Yeah, but this is a “needle in a haystack” problem that ChatGPT and AI in general are actually very useful for, i.e. solutions that are hard to find but easy to verify. Issues like this are hard to find because they require combing through your code, config files, and documentation, but once you find the solution it either works or it doesn’t.
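    To make the “hard to find, easy to verify” point concrete, here’s a minimal hypothetical sketch (the config keys and checker function are invented for illustration, not from any real service): a one-letter typo in a config file can take hours of doc-combing to locate, but once a fix is suggested, checking it is a single pass/fail test.

    ```python
    # Hypothetical example: a service fails to start because a config key
    # is misspelled ("bind_adress" vs "bind_address"). Finding the typo is
    # the hard part; verifying a suggested fix is trivial.
    import configparser

    def config_is_valid(text: str) -> bool:
        """Return True if the [server] section has all required keys."""
        parser = configparser.ConfigParser()
        parser.read_string(text)
        required = {"bind_address", "port"}
        return required <= set(parser["server"].keys())

    broken = "[server]\nbind_adress = 0.0.0.0\nport = 8080\n"   # typo: hard to spot
    fixed  = "[server]\nbind_address = 0.0.0.0\nport = 8080\n"  # suggested fix

    print(config_is_valid(broken))  # False
    print(config_is_valid(fixed))   # True
    ```

    The asymmetry is the whole point: the verification step is mechanical, so a plausible-but-wrong AI suggestion costs you almost nothing to reject.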




  • This would be true if Chomsky’s claim was that he was simply studying human language acquisition and that machines are different, but his claim was that machines can’t learn human languages because they lack some intuitive, innate grammar.

    Saying an llm hasn’t learned language becomes harder and harder the more you talk to it and the more it starts walking like a duck and quacking like a duck. To make that claim you’ll need some evidence to counter the demonstrable understanding the llm displays. Chomsky, in his New York Times response, just gives his own unprovable theories on innate grammar and some examples of questions llms supposedly “can’t answer,” but if you actually ask any modern llm, it answers them fine.

    You can define “learning” and “understanding” in a way that excludes llms, but you’ll end up relying on unprovable abstract theories until you can come up with an example of a question/prompt that any human would answer correctly and llms won’t, to demonstrate that difference. I have yet to see any such examples. There’s plenty of evidence of them hallucinating when they reach the edge of their understanding, but that is something humans do as well.

    Chomsky is still a very important figure, and his political work in Manufacturing Consent is just as relevant as when it was written over 20 years ago. His work on language, though, is on shaky ground, and llms have made it even shakier.