Was surprised to see Threads take up federation quicker than Bluesky, especially since they needed it the least.
Yeah, but this is a “needle in a haystack” problem that ChatGPT and AI in general are actually very useful for, i.e. solutions that are hard to find but easy to verify. Issues like this are hard to find because they require combing through your code, config files, and documentation, but once you find the solution it either works or it doesn’t.
It is a different level of scale: Mastodon has about 1 million users spread over a bunch of instances, while Threads has over 200 million users on one instance. Also, due to the network nature of social media, the number of possible connections (and the messages flowing through them) grows roughly with the square of the user count, not linearly.
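A rough back-of-the-envelope sketch of that scaling claim (my own illustration, not from the comment above), using the standard "n choose 2" count of possible user pairs:

```python
def max_connections(n: int) -> int:
    """Maximum possible pairwise connections among n users: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Hypothetical round numbers from the comment: ~1M Mastodon users vs ~200M Threads users.
mastodon = max_connections(1_000_000)    # ~5.0e11 possible pairs
threads = max_connections(200_000_000)   # ~2.0e16 possible pairs

# A 200x jump in users means roughly a 200^2 = 40,000x jump in possible connections.
print(threads // mastodon)  # → 40000
```

This is an upper bound, of course; real follow graphs are sparse, but the quadratic ceiling is why federation traffic is a much bigger engineering problem at Threads scale.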
By a truly unbelievable coincidence, I was recently out for a walk when I saw a small package fall off a truck ahead of me. … Inside, we found the latest versions of the Cellebrite software, a hardware dongle designed to prevent piracy (tells you something about their customers I guess!), and a bizarrely large number of cable adapters.
I guess it makes sense that Signal works with the mafia.
This would be true if Chomsky’s claim were simply that he was studying human language acquisition and that machines are different, but his actual claim was that machines can’t learn human languages because they lack some intuitive, innate grammar.
Saying an LLM hasn’t learned language becomes harder and harder the more you talk to it and the more it starts walking like a duck and quacking like a duck. To make that claim you’ll need some evidence to counter the demonstrable understanding the LLM displays. In his New York Times response, Chomsky just gives his own unprovable theories on innate grammar and some examples of questions LLMs supposedly “can’t answer,” but if you actually ask any modern LLM, it answers them fine.
You can define “learning” and “understanding” in a way that excludes LLMs, but you’ll end up relying on unprovable abstract theories unless you can come up with an example of a question or prompt that any human would answer correctly and LLMs won’t, to demonstrate that difference. I have yet to see any such examples. There’s plenty of evidence of them hallucinating when they reach the edge of their understanding, but that’s something humans do as well.
Chomsky is still a very important figure, and his political work in Manufacturing Consent is just as relevant as when it was written over 30 years ago. His work on language, though, is on shaky ground, and LLMs have made it even shakier.
Renewables work; you just have to build batteries to store the power, which California has been doing, and that’s part of the reason energy is cheaper there. There’s more hydro too, but solar charging batteries has been overtaking it at peak demand recently.
The rape narrative has been massively overblown, with little evidence backing it up besides racial bias against brown men. Did some women get sexually assaulted? Probably. Was it widespread and systemic like the media is pushing? Probably not. This isn’t to excuse the disgusting behavior of Hamas on October 7th, just to say it’s more likely this guy sexually assaulted Palestinians in an Israeli jail than that his girlfriend was assaulted.
Just because a member of a group did something bad to you doesn’t give you the right to abuse that group back. If that guy’s girlfriend was sexually assaulted by a Black person, that wouldn’t give him the right to yell slurs and push around Black people on the street. This guy wasn’t yelling about Hamas; he was yelling at the idea of Palestine, and that’s just straight-up racism.