The Facebook Files made – and provided evidence for – multiple allegations, including that Facebook was well aware of how toxic Instagram was for many teen girls; that Facebook has a “secret elite” list of people for whom Facebook’s rules don’t apply; that Facebook knew its revised algorithm was fueling rage; and that Facebook didn’t do enough to stop anti-vax propaganda during Covid-19. Most damningly of all, The Facebook Files reported that all of these things were well known to senior executives, including Mark Zuckerberg.
It’s clear which side Sorkin is taking. “I blame Facebook for January 6,” he said last year. “Facebook has been, among other things, tuning its algorithm to promote the most divisive material possible. Because that is what will increase engagement … There’s supposed to be a constant tension at Facebook between growth and integrity. There isn’t. It’s just growth.”
“tuning its algorithm to promote the most divisive material possible. Because that is what will increase engagement”
But at the same time, every time I described on Lemmy an experience of not maximizing engagement by maximizing conflict, I got downvoted to hell’s basement. And that is despite two of the three modern social media experience models being aimed at exactly that: the Facebook-like and the Reddit-like ones, with only the Twitter-like model excluded (and that one is unfortunately vulnerable to bots). I mean, there’s less conflict on fucking imageboards, and those were at some point considered among the most toxic places on the interwebs.
(Something-something Usenet-like namespaces instead of communities tied to instances; something-something identities that are also not tied to instances and are cryptographic; something-something subjective moderation, i.e. subscribing to moderation authorities of your choice, which would feel similar to joining a group, and the UI could even offer several combinations of the same namespace with different moderation authorities; something-something a bigger role for client-side moderation, i.e. ignoring in the UI the people you don’t like. Ideally the only things that actually get removed and not propagated to anyone would be stuff like calls for mass murder, stolen credentials, gore, real rape and CP. The “posting to a namespace versus posting to an owned community” dichotomy is important: the latter triggers a “capture the field” reaction in humans.)
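For what it’s worth, here is a very rough toy sketch of the shape I’m gesturing at, in Python. None of it is a real protocol or library; every name is invented, and it only exists to make the “namespaces + cryptographic identities + subscribable moderation authorities + client-side ignores” combination concrete:

```python
# A very rough toy sketch; every name here is invented for illustration,
# it is not any existing protocol or library.
import hashlib
from dataclasses import dataclass, field

@dataclass
class Identity:
    """An identity is just a key fingerprint, not an account on some instance."""
    pubkey: str  # stand-in for a real public key

@dataclass
class Post:
    author: Identity
    namespace: str     # Usenet-style name owned by nobody, e.g. "talk.politics"
    body: str
    post_id: str = ""  # global identifier derived from the content itself

    def __post_init__(self):
        self.post_id = hashlib.sha256(
            (self.author.pubkey + self.namespace + self.body).encode()
        ).hexdigest()

@dataclass
class ModerationAuthority:
    """Publishes the set of post ids it would remove; readers subscribe or not."""
    name: str
    removed: set = field(default_factory=set)

@dataclass
class ClientView:
    """A reader's combination of a namespace, chosen authorities, and a personal ignore list."""
    namespace: str
    authorities: list
    ignored_authors: set = field(default_factory=set)

    def visible(self, posts):
        for p in posts:
            if p.namespace != self.namespace:
                continue
            if p.author.pubkey in self.ignored_authors:  # client-side moderation
                continue
            if any(p.post_id in a.removed for a in self.authorities):  # subjective moderation
                continue
            yield p
```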
…And under the current model, the egos of mods get crazy big as they watch their community (read: army) grow and realize they can shape it however they want. Even Stack Overflow suffered from this, and developers left in droves long before LLMs took its place.
I do miss the original imageboards though, the ones that used sage and where moderation was a community-driven effort.
The mod ego problem will exist as long as there’s moderation, unfortunately.
It was present on the web even before it was expelled from heaven.
But it’s not necessary to remove all moderation. Global identifiers for posts, plus many different “moderating projections” of the same collection of data, can be enough to change the climate for most users. It isn’t moderation itself that really matters; what matters is the ability to dominate, to shut someone’s mouth. If the only way you can see a post at all is with no moderation whatsoever, then maybe it’s too rude. If it’s removed at the instance level on most instances, then it’s probably something really nasty that shouldn’t be seen. But if it’s visible in some projections and hidden in others, then we’ve solved this particular problem.
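And to make the “visible in some projection, hidden in another” part concrete, here is a tiny self-contained toy (again, everything is invented and purely illustrative):

```python
# A toy illustration of "projections": the same global collection of posts,
# filtered through different moderation-authority subscriptions.

posts = {
    "post-1": "a rude but arguable take",
    "post-2": "something genuinely nasty",
}

# Each "authority" just publishes the set of global post ids it would remove.
authorities = {
    "strict":  {"post-1", "post-2"},
    "lenient": {"post-2"},
}

def projection(posts, subscribed_authorities):
    """A reader's view: hide anything removed by any authority they subscribe to."""
    removed = set().union(*(authorities[a] for a in subscribed_authorities))
    return {pid: body for pid, body in posts.items() if pid not in removed}

# post-1 is hidden for strict subscribers but visible for lenient ones: nobody got
# silenced globally, readers just chose different projections. post-2 is gone in
# both views, which is the signal that it is probably the genuinely nasty stuff.
print(projection(posts, ["strict"]))   # {}
print(projection(posts, ["lenient"]))  # {'post-1': 'a rude but arguable take'}
```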
Yeah, I agree, sort of. People have the right to be offended, so I prefer looser moderation over the absolute kind; otherwise there’s no difference between the groups that preach ‘everything is inclusive (except what we don’t like)’ and the ones that are openly extreme and have their own biases. The irony of free speech is that you’re going to hear things you don’t agree with, and that’s fine.
Let’s fucking go
@[email protected] also relevant: a Meta whistleblower testifying in front of Congress
Your link is borked. Here’s a fixed version: https://www.c-span.org/program/senate-committee/meta-whistleblower-testifies-on-facebook-practices/658354
@Badabinski @sbv @maam @ryanee @ryanee @ryanee thanks
deleted by creator
In such a hypothetical system.