The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 3 Posts
  • 222 Comments
Joined 10 months ago
Cake day: January 12th, 2024


  • Bots are parasites: they only thrive if the host population is large enough to maintain them. Once the hosts are gone, the parasites are gone too.

    In other words: botters only bot a platform when they expect human beings to see and interact with the output of their bots. As such they can never become the majority: once they do, botting there becomes pointless.

    That applies even to repost bots - you could have other bots upvoting the repost, but you won’t do it unless you can sell the account to an advertiser, and the advertiser will only buy it if they can “reach” an “audience” (i.e. spam humans).




  • I’m not expecting a big exodus, but rather a slow decline in both the number of users and their engagement, with a few peaks here and there that seem to reverse the downward trend, each peak smaller than the one before.

    They won’t be leaving for the same reason as most people here did, pissed off at the IPO-related changes (such as killing 3rd party apps). It’ll be more like “…meh, why would I check Reddit? There’s better stuff elsewhere.” We can already see content quality declining on Reddit; it’ll only get worse over time.

    I think that most will end up on Discord. Some on Bluesky, and some will simply touch grass. Conservatives might end up on Minitrue “truth social” or crap like that.

    Facebook might absorb some of the former Reddit users, too. It feels disgusting for the privacy-conscious, but for them it’ll be a simple matter of not finding interesting stuff on Reddit.

    The same applies to Reddit’s liquid profit - for now, that value extraction still creates a small peak in raw profit, enough that the bottom line turned positive; later the peak will barely reach the surface; eventually, value extraction will be needed just to keep the bottom line from going too far negative.




  • I fucked it up and switched the terms, sorry. Look for “value extraction” instead; you’ll find multiple references to the concept, such as this or Mazzucato’s “The Value of Everything”.

    To keep it short: you create value when you produce desirable goods/services for your customers; when you extract it, however, you’re taking value that was already created (by society, your customers, or even your own business) and turning it into profit. The latter is faster but unsustainable, as that value doesn’t pop up from nowhere; so when a business shifts from value creation to value extraction, it gets some quick cash and then goes kaboom.

    In Reddit’s case, this value is mostly users willing to generate, curate, and share content with the platform, and other users knowing this:

    • someone recommends you a product/brand. The person might be wrong, but you can be reasonably sure that they aren’t a corporation astroturfing its own product. Someone else might criticise it instead;
    • you hop into your favourite subreddit and, while the content there isn’t the best, it’s still good enough - because the mods gave some fucks about growing their subreddits;
    • you discuss some controversial topic. You might get dogpiled, but at least you know that the dogs piling on you are human beings who might sometimes listen to reason; a bot never will;
    • et cetera.

    All that value was being slowly extracted over the last few years, but the changes in 2023/2024 hit it the hardest.


  • As I often mention in other communities, this smells like value extraction* from a distance. Value extraction typically generates a peak of profit in the short term, but it makes losses even harsher in the long run.

    As such I don’t think that Reddit is getting “bigger”. That profit is like someone who lives in a wooden house, dismantling their own home to sell it as lumber; of course they’ll get some quick cash, but it’s still a bad idea.

    In a letter to shareholders, Reddit CEO Steve Huffman attributed the recent increase in users to the platform’s AI-powered translation feature.

    Let’s pretend for a moment that we can totally trust Huffman’s claim here. Even human translations often have issues, as nuances and whatnot get lost, and this generates petty fights, especially in a younger userbase like Reddit’s; with AI’s tendency to hallucinate, that gets way worse. And even if that weren’t an issue, a lot of content is simply irrelevant to people outside a certain regional demographic.

    *EDIT REASON: I switched the terms, sorry. (C’mon, I’m L3.)


  • Kind of. @storksforlegs@beehaw.org is right that journalistic standards prevent too much meddling. Plus, commercial news outlets defending certain interests have a better tool for manipulation: instead of lying, they pick which true pieces of info to release as relevant, and paint them one way or another.

    For example: let’s say that Alice insults Bob, and Bob slaps Alice in return. Someone defending Alice would say that she was the victim of aggression, while someone defending Bob would say that he reacted to Alice’s verbal abuse. Neither is false, but neither gives the full picture. LLM/A“I”-style bullshit, meanwhile, would say “Alice picked up a puppy and beat it to death with Bob’s face”.



  • I like this piece. Well thought out, and well laid out.

    I do believe that mods getting weathered, as OP outlined, is part of the issue. I’m not sure of good ways to solve this, but introducing a few barriers to entry here and there might alleviate it. We just need to be sure that those barriers actually sort good newbies in and bad newbies out, instead of simply locking everyone out. Easier said than done.

    Another factor is that moderation work grows faster than community size: you get more threads, each with more activity; users spend more time in your community; they come from more diverse backgrounds, so they’re more likely to disagree; forest fires spread faster; and so on. This is relevant here because communities nowadays tend to be considerably bigger than in the past; and, well, when you have more stuff to do, you tend to do it in a sloppier way.

    You can recruit more mods, of course; but mod team size is also a problem, as it’s harder to get everyone on the same page and enforce rules consistently. If one mod is rather lax and another is strict, some people get away with worse than what others were banned for, and that makes the whole mod team look like it’s power-tripping and playing favourites, even when it isn’t. (I’m not sure how to solve this problem besides encouraging people to migrate to smaller communities once they feel like the ones they’re in are too big.)


  • I think that it would be theoretically possible with a modified client. But in practice you’d filter a lot of genuinely active users out, and still let a lot of those suspicious accounts in. Sadly I think that blocking them individually is a better approach, even if a bit more laborious.
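
    To see why such a filter misfires, here’s a toy sketch of the kind of heuristic a modified client might apply. The fields and thresholds are made-up assumptions for illustration, not any real Lemmy or Reddit API:

    ```python
    # Hypothetical sketch: flag accounts that post heavily but rarely interact.
    # Any real prolific-but-quiet human gets caught too (false positive), and a
    # bot that pads its comment count slips through (false negative).

    def is_suspicious(account: dict,
                      max_posts_per_day: float = 20.0,
                      min_comments_per_post: float = 0.5) -> bool:
        """Naive heuristic: high post volume with little commenting."""
        posts = account.get("posts", 0)
        comments = account.get("comments", 0)
        days_active = max(account.get("days_active", 1), 1)
        if posts == 0:
            return False  # nothing to judge
        posts_per_day = posts / days_active
        comments_per_post = comments / posts
        return posts_per_day > max_posts_per_day and comments_per_post < min_comments_per_post

    accounts = [
        {"name": "human", "posts": 30, "comments": 200, "days_active": 300},
        {"name": "spammy", "posts": 5000, "comments": 10, "days_active": 100},
    ]
    flagged = [a["name"] for a in accounts if is_suspicious(a)]
    ```

    Whatever thresholds you pick, you’re just moving the error around - which is why per-account blocking, laborious as it is, ends up more reliable.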

    On a lighter note, this sort of user isn’t a big deal here on Lemmy. It’s simply more efficient to manipulate a larger userbase, like Twitter’s or Reddit’s.


  • That’s a great analogy. And a fair point - it got buried, but it’s still there.

    At least when it comes to individuals using the platform. The platform itself is still listening to you, and sharing what it hears with advertisers; that’s the whole model behind Meta (WhatsApp) and Snapchat. They’re still hearing you, and they want to talk to you (shhh, I’ve heard you bought [product]? Here are some offers for even more [product]!), regardless of what you want.


  • The whole “one individual talking to another” aspect of the internet of the 00s is gone. It feels more and more like “everyone is talking to you and hearing you, like it or not”. Facebook is only one example of that - and even if it hadn’t enshittified, I find it unlikely that it would’ve kept that aspect.

    I also wonder if my experience with Orkut wouldn’t have been similar to the author’s with FB, if only Google hadn’t killed Orkut. (It was a big thing here.)


  • 2:10 “I assumed that, if I couldn’t beat the system, there was no point in whatever I was doing”: that’s the old nirvana fallacy. The rest of the video is about dismantling it for the individual, and it boils down to identifying who you’re trying to protect yourself against (threat modelling), compromising, etc.

    It’s relevant to note that every tiny bit of privacy that you can get against a certain threat helps - especially if it’s big tech, as the video maker focuses on. It gives big tech less room to manipulate you, and black hats less info to haunt you with after you read that corporate apology saying “We are sorry. We take user safety seriously. Today we had a breach […]”.

    And on a social level, every single small action towards privacy that you do:

    • makes obtaining personal data slightly more expensive, and thus slightly less attractive;
    • supports privacy-respecting alternatives a tiny bit more;
    • normalises seeking privacy a tiny bit more;

    and so on. Seeking your own privacy helps to build a slightly more private world for you and for others, even if you don’t get the full package.


  • what can we do?

    The link itself offers a good first step: Stallman himself should be encouraged to step down, and if he doesn’t, the FSF should remove him from its board.

    Furthermore, we should back both measures and, if they fail, back a competing entity instead.

    This should be done in a subtle way, though - without causing unnecessary drama. I know, easier said than done.

    A silver lining in all of this is that his saner views are likely to be backed by other people in the libre software movement.



  • There are a few things that Stallman really does not get.

    1. Power over an individual reduces their ability to consent, and adults have considerable power over teens.
    2. The discussion about having those teens accessing pornography should be handled separately. It’s simply not the same matter.
    3. Pornography and nudity are not the same thing.
    4. No matter how bad witch hunters are, this should not be used as a defence for the alleged target of their witch hunts.
    5. “Normal” or “natural” are not the same as “should be taken as morally, ethically, or legally acceptable”.

    Once you take those things into account, you notice that most of what Stallman says about the topic isn’t just immoral, it’s outright idiotic.