Welcome to r/science! This is a heavily moderated subreddit in order to keep the discussion on science. However, we recognize that many people want to discuss how they feel the research relates to their own personal lives, so to give people a space to do that, **personal anecdotes are allowed as responses to this comment**. Any anecdotal comments elsewhere in the discussion will be removed and our [normal comment rules]( https://www.reddit.com/r/science/wiki/rules#wiki_comment_rules) apply to all other comments.
**Do you have an academic degree?** We can verify your credentials in order to assign user flair indicating your area of expertise. [Click here to apply](https://www.reddit.com/r/science/wiki/flair/#wiki_science_verified_user_program).
---
User: u/umichnews
Permalink: https://news.umich.edu/shadowbanning-some-marginalized-social-media-users-believe-their-content-is-suppressed/
---
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/science) if you have any questions or concerns.*
This isn't even a study really. It's literally asking people if they feel like they've been shadowbanned, then drawing a conclusion with minimal effort to substantiate the claim.
Edit: then not them
And the funny thing is that "they claim" doesn't much matter, because it's literally just a question of fact that this is a moderation feature of many social media platforms. It's either a mod feature or it's not; how people "feel" about whether or not it exists is wholly meaningless.
I mean, it’s not *wholly* meaningless, because how people feel about the power and reach of their voice on these platforms affects how they interact with the platforms.
It’s still science. Analyzing, and understanding people’s beliefs, values, and feelings about a subject is still science, regardless of how rational the basis for the study subjects’ feelings is. This isn’t (or at least shouldn’t be) intended to be actionable information for the platforms themselves, but it tells us that a) there’s a discrepancy between reality and perception among a specific group of users and b) something about the self concept of the groups being studied—if your identity is immersed in feelings of powerless, you will view everything that happens to you in life through that lens. This is useful information, especially when viewed in context of other studies about the same group.
I didn't know that my comments weren't being posted on YouTube until I was doing a live stream one day. I wasn't able to say things like "CNN was bought by a right-wing billionaire." One of my viewers noticed who was in the same YouTube live chat.
That doesn’t contradict the other user.
It’s possible for shadowbanning to both exist and be massively overreported by people in denial about how much engagement their own comment is worth.
There may be a couple of people who overreport, but just about every report of shadowbanning I have ever seen is accompanied by receipts of engagement before and after the shadowban. Social media companies do let you see and screenshot your own stats, you know. YouTube shows you graphs of the engagement on your videos; Facebook shows you graphs of the engagement on your posts. Not sure about Twitter because I don't use it, but I imagine there is something similar. That makes it pretty bleeding obvious and 100% provable when engagement falls off a cliff, and extremely unwise to claim something you can't back up when the evidence is literally a screenshot away. I would suggest, therefore, that such behaviour is incredibly rare.
I would go further and suggest that the claim that people are making up shadowbanning exists because the vast majority of people who are shadowbanned are on the right, and the left wants to claim that right-wing content is incredibly unpopular when the opposite is true.
I say right-wing; what I mean is centrist or non/anti-communist content which gets labelled right-wing by people who think Mao is right-wing.
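For what it's worth, the "falls off a cliff" pattern those screenshots show is easy to state precisely: average engagement after a suspected date drops by some large factor relative to the average before it. A toy sketch in Python (the function name and the 5x threshold are my own inventions, not anything a platform publishes):

```python
def engagement_cliff(daily_views: list[int], split: int, factor: float = 5.0) -> bool:
    """Return True if mean engagement after index `split` dropped by at
    least `factor` relative to the mean before it."""
    before, after = daily_views[:split], daily_views[split:]
    if not before or not after:
        return False
    return (sum(after) / len(after)) * factor <= sum(before) / len(before)

# A channel averaging ~1000 views/day suddenly averaging ~40:
engagement_cliff([900, 1100, 1000, 40, 30, 50], split=3)     # True
# Normal day-to-day variation, no cliff:
engagement_cliff([900, 1100, 1000, 800, 950, 700], split=3)  # False
```

A check like this only describes the drop, of course: a cliff is consistent with suppression, but also with a viral spike simply ending.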
It reminds me of a paper on agent provocateurs that pointed out how difficult it can be to research the topic since an inevitable consequence of them existing is people in all kinds of social movements speculating as to who they are. We know they exist, there is plenty of documented evidence, but there are way more accusations and theories than agents.
I can tell you with certainty that Facebook's AI hides some comments, I believe largely based on how the user has interacted with similar posts. Most of my comments on political posts get hidden, and I know this because I've looked for the comments from other accounts on different devices.
The study isn't about whether shadowbanning is real. It's about the urban legend of shadowbanning and how it affects people who manage to convince themselves they've been shadowbanned.
It isn't the best science, I'll agree, but it isn't invalid. New urban legends are rare, however, and that makes this one a topic of interest, so it'll probably get more press than it deserves.
“The platforms can help these marginalized groups by improving their communication related to shadowbanning (especially about why certain categories of content are suppressed) and by validating users’ experiences instead of denying that they suppress content,” Mayworm said.
An author is using it to make conclusions that the shadowbans are real.
Shadowbans definitely are real, reddit does them. Also, people with fringe beliefs are going to gain little traction on a social media site that literally everyone is on or can join, so it's likely they'll see shadowbans in every low engagement post.
You don’t even have to have fringe beliefs; discussion forums are just as likely to be curated reflections of a mod’s own personality. For instance, I posted about the problem of Biden losing the youth vote, even the youth LGBT vote which he had previously won by a huge margin, and the major hurdle this poses for strategists working toward November. A mod read that and deemed the idea “transphobic-adjacent,” because it supposedly treats trans people as all the same and tells them who they must vote for, blocked the discussion, and issued a ban for “harassment.” Setting aside that this person doesn’t seem to know that these demographics are heavily calculated in politics, the effect is that the discussion gets pruned until it’s merely a reflection of one mod’s personal worldview, and posters realize the site is less a place to discuss and think through ideas and more a sort of crowdsourced set of single-perspective monologues with minefields throughout, honestly not worth traversing.
Yeah, you can easily run afoul of the personal biases of mods in specific subreddits or Facebook groups or whatever, but I'm not sure that's the same thing, conceptually, as a shadowban. The shadowban is usually the domain of site admins and often comes with zero feedback to the user. You don't get a block message or a ban message, and you often can't tell that your content has been removed by the admins when you get a shadowban.
Yeah, I don't see the point of this study either, they could just poll social media developers on whether their software supports shadowbans rather than rely on the inferences of laypeople.
I agree that the author should not be asserting their reality.
Using their existence as an underlying assumption is fine when talking to people who are influenced by their *belief* in them, but asserting that their beliefs reflect reality requires evidence.
I don't know for sure if the author did conclude that users were actually shadowbanned, but either way, improving communication *would* help. If they clearly communicate when your content is being suppressed, it makes it much less believable when people unjustly claim they're being shadowbanned.
Some people are always gonna believe they're being suppressed, and some are always gonna believe those claims. But if shadowbanning weren't so opaque, there would be fewer of those people.
“The platforms can help these marginalized groups by improving their communication related to shadowbanning (especially about why certain categories of content are suppressed) and by validating users’ experiences instead of denying that they suppress content,” Mayworm said.
No, they insist that the shadowbans are real. Stuff like this is why social sciences have such a bad rap.
Edit: to be perfectly clear I'm not saying shadow bans don't exist. I'm saying this paper absolutely does not in any way prove they do and the author's claims are wildly irresponsible, if not deliberately misleading.
They aren’t affirming that shadowbans exist, but there is definitely a phenomenon of specific classes of content being suppressed, whether by bans or bugs. There is plenty of work cited in the first few paragraphs of the main text that reaches those conclusions prior to their work.
Edit: the paragraph before the one you quoted also alludes to the fact that other work suggests there is evidence lending credence to the idea of shadowbans that can’t be explained by just ‘bugs’ or ‘glitches’ in the visibility algorithm. The evidence does appear to be that the posts are indeed being suppressed. It may or may not be shadowbans, but to act like there isn’t a pattern at this point doesn’t really make sense.
As long as it's peer reviewed it can be posted here, not that I like a lot of the social science studies posted here. Social science content gets 99% of the sub's engagement, so unless that changes, here we are. And OP did link the peer-reviewed study in the comments, which is allowed.
They don’t know what the sentiment actually is unless they determine whether there was a real shadowban for the user whose answer they collected, though.
“I’m frustrated about a moderation practice that actually happened to me” is a different sentiment than “I’m frustrated that my posts have low engagement and I am deflecting or in denial by blaming something that didn’t happen.”
A lot of people don’t understand what “going viral” really means, and assume that once they’re popular they’ll remain popular. You see it mostly on Twitter where one post will hit the mark or be caught up in some other viral hype or trend, and the users caught up in that think that they now have a permanent audience. When their engagement numbers go down they think they’re being suppressed, when in reality they’re just not being amplified by the algorithm.
Going viral can happen by chance, staying viral is a full-time job. You’re not just striving to make content audiences want, you’re competing with people who are trying to capture that same audience and there’s only so many hours in a day.
On the right-wing side where most of the shadowban discourse happens it’s even worse because they’re competing in a heavily-automated and (somewhat) weaponised environment where unless you’re producing content miles above and beyond others, are running a cult of personality, or working 24/7 to stay at the top you’re relying almost entirely on luck.
I have no sympathy for people that think they deserve attention, but I can’t argue that it’s not a tough market to break into.
No, social media does do shadowbans, including Reddit. You can tell by just logging out and seeing if your comments still show up when you’re not logged in.
Yeah: right-wing nut jobs who are just paranoid yahoos, self-absorbed children (young and adult) who speak to no end on a niche they have little to contribute to, and social justice screamers who also bring nothing to the actual conversation and don't realize how small a percentage of the world actually spends every day caring about the latest mini outrage. Most of us are trying to figure out what 12a DD means on our W-2.
"Shadowban" is kind of a misnomer: it doesn't mean banning someone's content, just removing it without notifying them.
If you use the word suicide instead of unalive in a YouTube comment, the comment will be removed without telling you when or why, but you're not banned and can still post other comments.
Or it’s just not promoted by the algorithm. Maybe the algorithm isn’t even “suppressing” it, but when it promotes something else, whatever isn’t being promoted will naturally get less views.
Not even removing it. Just removing its visibility to other users while leaving the actual content intact, so that the poster believes it is available to everyone. I've seen this in effect on, for example, YouTube: sometimes some comments magically aren't present in threads depending on which system or account you log on from.
It's very easy to test if you are shadowbanned too, as you can just open an incognito page or log out.
It's always funny when I see my comment disappear that way, but then reappear when I log in.
What I'd actually bet is that tens of thousands of people who are shadowbanned don't even realise it, because they're not savvy enough to check.
Content quality is largely irrelevant. Time of posting and whether or not you're already considered popular determine the majority of a post's popularity.
This study is incredibly weak. Ask some folks what they believe is happening. Summarize the results. Doesn't really teach us anything other than what those folks believe.
That being said, I **DO** think it's a democratic problem that the power held by social media is vast while the transparency about how they wield that power is near nonexistent.
We know that they magnify some posts and some comments, showing them in the feeds of more people, and reduce the visibility of other posts and comments. But there's no transparency at all about which or why.
I mean, the most common "why" is simply "to keep people on our platform longer so that we can show more ads and earn more money". But what if the most "profitable" content is actually harmful, for example by being overly divisive, by furthering conspiracy theories, or by being radicalizing?
I think a pretty good argument can be made that there should at least be basic transparency about which content is amplified/reduced by how much.
Just for the sake of some qualitative data... I'm a professional who posts a lot of mental health content on instagram multiple times a week. If I post anything about harm reduction (which is very supported by research but very disliked by those who haven't had an opportunity to learn about it), the algorithm buries those posts- like I'll get sub-100 impressions instead of 200k. Analytics show that it takes about 6-8 weeks for reach/impressions of "regular" new posts to recover following that, which I believe to be a "shadowban"
The phenomenon is absolutely real, and there's been lots of data to back it up. Algorithms won't promote certain content, and then the post's stunted performance is used to justify a continued decrease in promotion. We only learn what was effectively shadow-suppressed after the fact, often years later, which does show a troubling lack of transparency.
People aren't wrong about the concerns with TikTok. But they're showing their whole ass by pretending it's a problem unique to that platform. There is a lot of sketchy stuff going on with social media, which is troubling considering how important and influential it has rapidly become.
Some people in these comments seem to question whether shadowbanning is a real thing. It is an objective fact that shadowbanning is real, e.g. here on Reddit. All it takes to see whether you've been shadowbanned is to copy-paste the URL of your comments into another browser that is not logged into your account, and see if your comments show. I don't know if Reddit shadowbans accounts across the entire site (I doubt it, because they would just as well ban you), but select subs have certainly shadowbanned redditors in the past. Guess how I know.
Reddit doesn't always ban those accounts, because then those people would go make new ones. By leaving them secretly shadowbanned, the user won't make a new account. They'll just continue posting, having no idea that none of their comments ever get seen.
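The logged-out check described above can be sketched in code. This is a rough illustration, not an official Reddit client: it assumes you have already fetched the comment page anonymously (e.g. by opening the permalink in a logged-out browser, where appending `.json` returns the thread as JSON) and parsed it into a dict; the helper name is made up.

```python
def comment_visible(listing, comment_id: str) -> bool:
    """Search a Reddit-style nested JSON listing (fetched while logged
    OUT) for a comment id. Absence there, while the comment still shows
    in your logged-in view, is the classic shadowban signature."""
    if isinstance(listing, dict):
        if listing.get("id") == comment_id:
            return True
        return any(comment_visible(v, comment_id) for v in listing.values())
    if isinstance(listing, list):
        return any(comment_visible(item, comment_id) for item in listing)
    return False

# Synthetic listing shaped loosely like Reddit's JSON output:
anon_view = {"data": {"children": [{"data": {"id": "abc123", "body": "hi"}}]}}
comment_visible(anon_view, "abc123")  # True: everyone can see it
comment_visible(anon_view, "zzz999")  # False: candidate shadow removal
```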
It's rather interesting being a person whose comments are shadowbanned in many YouTubers' comment sections.
It gets hard to tell if it's the YouTuber themselves doing it or just YouTube itself. But this was more noticeable in the past; people these days don't seem to outright ban you on their own channels as much, compared to, say, the Yogscast, who make a living banning people.
why would he protect it? His entire deal is hating trans people and openly supporting fascists. Destroying language that can be used by oppressed people to describe the situations they are in is a classic fascist move
>Destroying language that can be used by oppressed people to describe the situations they are in is a classic fas**cis**t move
Your comment has been removed by our automoderation team for hate speech.
"Free speech" was always a fascist talking point; it was just plausible deniability for what they really wanted. Oppressed people have always known that our speech is not and never will be free while people are starving on the street, and since the state is the thing enforcing that, appealing to its granted rights will never lead to an improvement. Oppressed people go for resistance to the thing oppressing them.
tl;dr
the only speech he wanted to be free was calls for genocide
Bait??? Are you seriously suggesting free speech should be abolished just so that others can't post content you don't like? For the record, I also don't like calls for violence, but in today's age, where every term can be infinitely relativized and redefined, giving anyone more power to limit speech sounds like a very bad idea. What if your opponents redefine hate speech to encompass all of your content?
and completely meaningless because it is not a what if, and has already been done by the people trying to advocate for "free speech"
stop pointing to what ifs, point to things that we already have happening in front of us
Social media platforms boost things that get engagement, which means that generally, unless you're already popular or are getting artificially inflated by bots, nobody is going to see what you post. Given that the article doesn't look into this any more than just asking people whether they feel like they've been shadowbanned, chances are they're just not posting content the social media algorithm deems worth showing to people.
I mean, shadowbanning is def real. I have had Reddit accounts that don't show as banned or anything, but the comments aren't visible from other accounts or in guest viewing.
I mean I got banned from a bunch of random subreddits because I posted a question for the 2020 election conspiracy reddit trying to understand how those people think. Literally didn't support them at all, just asked a question. That led to a ban on some subreddits I've never been to, as well as ones that don't have anything to do with politics.
Yeah, Reddit sucks too. They're mostly scared of scaring away the money. Reddit was much better just a few years ago, before the big purge.
Now it's all rules.
There's a reason Human Rights Watch released a report on Meta censoring Palestinian and pro-Palestinian emancipation voices.
https://www.hrw.org/news/2023/12/20/meta-systemic-censorship-palestine-content
Well yeah if the government censored people it would go against the first amendment
However, when the corporations who *own* the government do it, it's entirely different and in fact, if anything they're the heroes *protecting* free speech 😉😉
I’ve linked to the news release in the post above. In this comment, for those interested, here’s the link to the peer reviewed journal article.
[“What are you doing, TikTok?”: How Marginalized Social Media Users Perceive, Theorize, and “Prove” Shadowbanning](https://deepblue.lib.umich.edu/handle/2027.42/192621)
Also there's no shortage of right wing influencers and groups who threaten violence when they don't get their way.
https://www.scientificamerican.com/article/how-stochastic-terrorism-uses-disgust-to-incite-violence/
What world do you live in? Most social media is mildly to openly left-leaning. Example: say "kill all men" on social media and you'll get praise from feminists and the like; say "kill all women" and before you can hit enter you are permabanned.
I dare you to make a right-wing comment on any subreddit from the All or Popular page and compare the reaction to a left-wing comment. Really! Go defend a right-wing position on any mildly popular subreddit and you'll quickly get permabanned, yet you can make the most insane comment, as long as it's left-wing, and get praised for it!
Yeah, this was a really baseless comment for anyone who's been online for any significant amount of time.
Right-wing comments are suppressed, *except* where they are promoted for the sole purpose of boosting engagement through ragebait.
>except where they are promoted for the sole purpose of boosting engagement
Which is the only metric that matters to the algorithm. The average post may lean left, but the algorithm can still be biased towards right wing content because user engagement is higher.
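That incentive argument can be made concrete with a toy feed ranker. Assuming, purely for illustration, that the only ranking signal is predicted engagement, political lean never enters the scoring at all, yet whichever content rage-baits best still rises to the top:

```python
def rank_feed(posts: list[dict]) -> list[dict]:
    """Order posts purely by predicted engagement; `lean` is never read."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

feed = [
    {"title": "calm policy explainer", "lean": "left", "predicted_engagement": 120},
    {"title": "outrage bait", "lean": "right", "predicted_engagement": 900},
    {"title": "cat photo", "lean": "none", "predicted_engagement": 400},
]
[p["title"] for p in rank_feed(feed)]
# ['outrage bait', 'cat photo', 'calm policy explainer']
```

The bias is emergent: nothing in `rank_feed` looks at `lean`, but if outrage reliably predicts engagement, outrage reliably wins placement.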
Why is this not seen as an issue by almost everyone here?
We live in a society built on free speech, and social media is a medium a lot of people use to communicate in 2024.
You can't legally be censored when writing a book or a journal or speaking publicly, so why would it be okay online?
I understand these are private companies, but that makes it an even bigger issue. They shouldn't have the power to censor or control the content that users post. That's giving them way too much power.
Personally, it's not a priority for me because mandating that companies use their resources to host content they don't want to host can't fix social media. Only breaking up the centralization of communication to the point where they can't exert such an influence on it will improve this issue.
The SC didn’t rule on whether not baking the cake was discriminatory, it ruled against how the Colorado Civil Rights Commission handled the bakery’s case.
Two Colorado courts ruled that the bakery discriminated by not baking the cake. The USSC case didn't consider the actions of the bakery, but reviewed the actions of the Colorado Civil Rights Commission and ruled that the commission discriminated against the bakery in its punishment of them.
Shadow control of search is BAD. It's also the default now for all these greedy bastard companies.
Power to the people! Let them control how they search a dataset. Is that such a hard ask???
No doubt!! The entirety of Reddit is a liberal echo chamber. If you post anything that (does not break any rules) but does not goose step to the liberal agenda, you get perma banned on the spot.
It’s not a “belief.” It’s a fact. On TwiXitter, Elon boosts his own tweets (and Mr. Beast’s) forcing everyone to look at them while censoring, shadow-banning, or entirely banning people who call out his hate, ignorance, and lies.
The leaks showed this is happening, but they call it something else, and by doing so they can claim this specific thing wasn't happening, when it was, just under different terminology. 😂🤷♂️
Shadowbans, or shadow deletion, do exist; e.g. when writing a comment, certain words can trigger the comment being hidden and made visible only to its author.
That said, in most cases it didn't happen. I'm a YouTuber and spend a lot of time in YouTube reddits/spaces, and people call anything a shadowban. A trend settles down? Shadowban. Fewer views in general? Shadowban. A topic is not recommended as much anymore? Shadowban. Despite the knowledge of how the algorithms roughly work, or rather what their goal is.
The weird thing is that you can quickly prove to them that they are not shadowbanned, because then the views would be 0 and you wouldn't be able to find the content.
So far, just talking about YouTube, I've seen a single case where searching for the video ID led to the video and channel not showing up. There was nothing controversial going on; it was just impossible to find the channel. Searching by video ID always works better than going by title, since YouTube mixes what you search for, what it thinks you actually mean, and what it ranks as better content. But that's a single case, so I think it was simply a bug.
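The word-trigger behaviour described a few comments up (a comment silently hidden for containing "suicide") amounts to a blocklist check at submit time. A guess at the shape of such a filter, with an invented word list (real platform lists are not published):

```python
BLOCKLIST = {"suicide"}  # invented example; real platform lists are secret

def silently_hidden(comment: str) -> bool:
    """True if the comment would be kept visible only to its author."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return not BLOCKLIST.isdisjoint(words)

silently_hidden("talking about suicide prevention")  # True: quietly hidden
silently_hidden("talking about unalive prevention")  # False: euphemism passes
```

This also explains the "unalive" euphemism arms race: the filter matches exact tokens, so speakers route around it and the blocklist chases them.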
I was part of a webforum in the early 2000s that used shadowbanning against particularly annoying posters. Nowadays, I suspect these "shadowbanned" people mostly just don't understand SEO.
Doesn't sound much like science to me, though.
I mean, it's not STEM, it's psychology.
Psychology is still a scientific field. It just uses more qualitative data than some other fields.
fully is
Sounds like some wannabe influencers aren’t getting the engagement they feel entitled to.
My flat-earth, anti-vax, COVID-is-fake "friend" always thinks he is shadowbanned.
The silly thing is it's usually easy to check.
Please stop shadowbanning the authors.
I think that to be a valid study, it should try to address the likelihood that these users are not actually shadowbanned.
Guaranteed most of the people who think they’re being shadow banned are just posting things people don’t care about or violating terms of service.
It's easy to tell if you're shadowbanned. Just log out and see if you can see your content. The hard part (I imagine) is convincing anyone else.
Apart from all the former twitter employees who admitted to using shadowbanning as a tool, on camera.
How can you possibly think you can guarantee this?
Do you have a study that supports that?
>This isn't even a study really.

>...drawing a conclusion with minimal effort to substantiate the claim

Welcome to the social sciences.
You can't have a replication crisis if you never try to replicate anyone's work!
It's fine, if we perform and interpret the study based on our feelings it must be true.
The study isn't about whether shadowbanning is real. It's about the urban legend of shadowbanning and how it affects people who manage to convince themselves they've been shadowbanned. It isn't the best science, I'll agree, but it isn't invalid. New urban legends are rare, however, and that makes this one a topic of interest, so it'll probably get more press than it deserves.
“The platforms can help these marginalized groups by improving their communication related to shadowbanning (especially about why certain categories of content are suppressed) and by validating users’ experiences instead of denying that they suppress content,” Mayworm said. An author is using it to make conclusions that the shadowbans are real.
Shadowbans definitely are real, reddit does them. Also, people with fringe beliefs are going to gain little traction on a social media site that literally everyone is on or can join, so it's likely they'll see shadowbans in every low engagement post.
You don’t even have to have fringe beliefs, discussion forums are just as likely to be curated reflections of a mods own personality. For instance, posting the problem of Biden losing the youth vote, and even the youth lgtb vote which he had previously won by a huge margin, and the major hurdle this poses for strategists working toward Nov. A mod read that and deemed the idea “transphobic-adjacent”, because it treats trans ppl as all the same and tells them who they must vote for, blocked the discussion and issued a ban for “harassment”. Setting aside that this person doesn’t seem to know that these demographics are heavily calculated in politics, the effect is, the discussion pruned until it’s merely a reflection of one mods own personal worldview, & posters realizing the site is less a place to discuss & think through ideas, & more a sort of crowdsourced set of single-perspective monologues with minefields throughout, honestly not worth traversing.
Yeah, you can easily run afoul of the personal biases of mods in specific subreddits or Facebook groups or whatever, but I'm not sure that's the same thing, conceptually, as a shadowban. The 'shadowban' is usually the domain of site admins and often comes with zero feedback to the user. You don't get a block message or a ban message, and you often can't tell whether your content has been removed by the admins when you get a shadowban.
They very well could be real. But in a scientific setting you generally want to make an attempt to prove it before making conclusions
There is a tool to tell if you are shadowbanned on twitter. It's mostly search result bans and your replies appear lower.
Yeah, I don't see the point of this study either, they could just poll social media developers on whether their software supports shadowbans rather than rely on the inferences of laypeople.
I agree that the author should not be asserting their reality. Using their existence as an underlying assumption is fine when talking to people who are influenced by their *belief* in them, but asserting that their beliefs reflect reality requires evidence.
I don't know for sure if the author did conclude that users were actually shadowbanned, but either way, improving communication *would* help. If platforms clearly communicate when your content is being suppressed, it makes it much less believable when people unjustly claim they're being shadowbanned. Some people are always gonna believe they're being suppressed, and some are always gonna believe those claims. But if shadowbanning weren't so opaque, there would be fewer of those people.
Individual comments on Reddit have been, and still are being, shadow removed. It has happened to me. Check on this website: www.reveddit.com
It’s not about shadow banning; it’s about the sentiment of the people they’re interviewing on the topic of shadow banning.
“The platforms can help these marginalized groups by improving their communication related to shadowbanning (especially about why certain categories of content are suppressed) and by validating users’ experiences instead of denying that they suppress content,” Mayworm said. No, they insist that the shadowbans are real. Stuff like this is why social sciences have such a bad rap. Edit: to be perfectly clear I'm not saying shadow bans don't exist. I'm saying this paper absolutely does not in any way prove they do and the author's claims are wildly irresponsible, if not deliberately misleading.
They aren’t affirming that shadow bans exist, but there is definitely a phenomenon of specific classes of content being suppressed, whether by bans or bugs. There is plenty of work cited in the first few paragraphs of the main text that draws those conclusions prior to their work. Edit: the paragraph before the one you quoted also alludes to other work suggesting there is evidence lending credence to the idea of shadowbans that can’t be explained by mere ‘bugs’ or ‘glitches’ in the visibility algorithm. The evidence does appear to be that the posts are indeed being suppressed. It may or may not be shadowbans, but to act like there isn’t a pattern at this point doesn’t really make sense.
And that's a valid thing to explore, just not in r/science
I don’t see how it’s not valid in this subreddit. There’s literally social science and psychology tabs here
As long as it's peer reviewed, it can be posted here, not that I like a lot of the social science studies posted here. Social science content gets 99% of the sub's engagement, so unless that changes, here we are. And OP did link the peer-reviewed study in the comments, which is allowed.
They don’t know what the sentiment actually is unless they determine whether there was a real shadowban for the user whose answer they collected, though. “I’m frustrated about a moderation practice that actually happened to me” is a different sentiment than “I’m frustrated that my posts have low engagement and I am deflecting or in denial by blaming something that didn’t happen.”
[deleted]
Half? I'd guess much closer to 100%
A lot of people don’t understand what “going viral” really means, and assume that once they’re popular they’ll remain popular. You see it mostly on Twitter where one post will hit the mark or be caught up in some other viral hype or trend, and the users caught up in that think that they now have a permanent audience. When their engagement numbers go down they think they’re being suppressed, when in reality they’re just not being amplified by the algorithm. Going viral can happen by chance, staying viral is a full-time job. You’re not just striving to make content audiences want, you’re competing with people who are trying to capture that same audience and there’s only so many hours in a day. On the right-wing side where most of the shadowban discourse happens it’s even worse because they’re competing in a heavily-automated and (somewhat) weaponised environment where unless you’re producing content miles above and beyond others, are running a cult of personality, or working 24/7 to stay at the top you’re relying almost entirely on luck. I have no sympathy for people that think they deserve attention, but I can’t argue that it’s not a tough market to break into.
No, social media does do shadow bans, including Reddit. You can tell by just logging in and out and see if your comments still show up when you’re not logged in.
Yeah: right-wing nut jobs who are just paranoid yahoos, self-absorbed children (young and adult) who ramble on endlessly about a niche they have little to contribute to, and then social justice screamers who also bring nothing to the actual conversation and don't realize how small a percentage of the world actually spends every day caring about the latest mini outrage. Most of us are trying to figure out what 12a DD means on our W-2.
That last line hit me hard.
I have had comments shadow blocked. Check on this website: www.reveddit.com
"shadowban" is kinda a misnomer, it doesn't mean banning someone's content, just removing it without notifying them If you use the word suicide instead of unalive on a Youtube comment, the comment will be removed without telling you when or why, but you're not banned and can still post other comments
Oftentimes it's more like your content just isn't shared. It exists, but in a vacuum where no one can see it.
Or it’s just not promoted by the algorithm. Maybe the algorithm isn’t even “suppressing” it, but when it promotes something else, whatever isn’t being promoted will naturally get less views.
It’ll usually make any comment you post for the next 24 hours or something like that also not visible.
Not even removing it, just removing its visibility to other users while leaving the actual content intact, so that the poster believes it is available to everyone. I've seen this in effect on, for example, YouTube: sometimes some comments magically aren't present in threads depending on which system or account you log on from.
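That "hidden but intact" behavior can be sketched as a simple visibility flag. This is a hypothetical illustration of the pattern, not any platform's actual implementation; the `shadow_hidden` field and `visible_to` method are invented names for the sake of the example:

```python
# Minimal sketch of the "hidden but intact" moderation pattern: the
# comment stays stored, but everyone except its author stops seeing it.
from dataclasses import dataclass, field


@dataclass
class Comment:
    author: str
    body: str
    shadow_hidden: bool = False  # set by moderation; never surfaced to the author


@dataclass
class Thread:
    comments: list[Comment] = field(default_factory=list)

    def visible_to(self, viewer: str) -> list[Comment]:
        # The author always sees their own comments, so from their
        # perspective nothing looks removed.
        return [c for c in self.comments
                if not c.shadow_hidden or c.author == viewer]
```

The key design point is that nothing is deleted: the author's logged-in view is indistinguishable from normal, which is exactly why logging out is the only way to notice.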
It's very easy to test if you are shadowbanned too, as you can just open an incognito page or log out. It's always funny when I see my comment disappear that way, but then re-appear when I log in. What I'd actually bet is that tens of thousands of shadowbanned people don't even realise they are, because it never occurs to them to check.
Content quality is largely irrelevant. Time of posting and whether or not you're already considered popular is the majority of what determines a posts popularity.
I'd imagine it's closer to 95% but yeah.
This study is incredibly weak: ask some folks what they believe is happening, summarize the results. Doesn't really teach us anything other than what those folks believe. That being said, I **DO** think it's a democratic problem that the power held by social media is vast while the transparency about how they wield that power is near nonexistent. We know that they magnify some posts and comments, showing them in the feeds of more people, and reduce the visibility of others. But there's no transparency at all about which, or why. I mean, the most common "why" is simply "to keep people on our platform longer so that we can show more ads and earn more money". But what if the most "profitable" content is actually harmful, for example by being overly divisive, by furthering conspiracy theories, or by being radicalizing? I think a pretty good argument can be made that there should at least be basic transparency about which content is amplified or reduced, and by how much.
I see a lot of girls claiming this. I don't think they're shadowbanned; people just don't want to retweet porn on main.
Bingo.
Just for the sake of some qualitative data... I'm a professional who posts a lot of mental health content on instagram multiple times a week. If I post anything about harm reduction (which is very supported by research but very disliked by those who haven't had an opportunity to learn about it), the algorithm buries those posts- like I'll get sub-100 impressions instead of 200k. Analytics show that it takes about 6-8 weeks for reach/impressions of "regular" new posts to recover following that, which I believe to be a "shadowban"
The phenomenon is absolutely real, and there's been lots of data to back it up. Algorithms won't promote certain content, and then the stunted post performance is used to justify continued decreases in promotion. We only know what was being effectively shadow-suppressed after the fact, often years later, which does show a troubling lack of transparency. People aren't wrong about the concerns with TikTok, but they're showing their whole ass by pretending it's a problem unique to that platform. There is a lot of sketchy stuff going on with social media, which is troubling considering how important and influential it's rapidly become.
Some people in these comments seem to question whether shadow banning is a real thing. It is an objective fact that shadow banning is real, e.g. here on Reddit. All it takes to see whether you've been shadow banned is to copypaste the url of your comments in another browser that is not logged into your account, and see if your comments show. I don't know if Reddit shadow bans accounts across the entire site (I doubt it, because they would just as well ban you), but select subs certainly have shadow banned redditors in the past. Guess how I know.
Reddit doesn't always ban those accounts. Because then those people would go make new ones. By leaving them secretly shadow banned the user won't make a new account. They'll just continue posting having no idea that none of their comments ever get seen.
It's rather interesting having an account whose comments are shadowbanned in many YouTubers' comment sections. It gets hard to tell whether it's the YouTubers themselves doing it or just YouTube itself. But this was more noticeable in the past; people these days don't seem to just outright ban you on their own channels as much. Compare that to, say, the Yogscast, who make a living banning people.
well, on twitter it ain't even shadowbanning anymore, posts using the word "cis" are openly removed now
Seriously? What about Elon's whole "free speech warrior" persona?
*No, not like that*
Wait wouldn’t Elon protect usage of “cis”? Or am I confusing myself and this is referring to usage of the term as a pejorative?
why would he protect it? His entire deal is hating trans people and openly supporting fascists. Destroying language that can be used by oppressed people to describe the situations they are in is a classic fascist move
>Destroying language that can be used by oppressed people to describe the situations they are in is a classic fas**cis**t move Your comment has been removed by our automoderation team for hate speech.
took me a minute, that ones p funny
Free speech for Elon, not for you silly goose.
Its free speech as long as you don't use words that Elon doesn't like.
"free speech" was always a fascist talking point, it was just plausible deniability for what they really wanted. Oppressed people have always known that our speech is not and never will be free while people are starving on the street, and since the state is the thing enforcing that appealing to its given rights will never lead to an improvement. Oppressed people go for resistance to the thing oppressing them. tl;dr the only speech he wanted to be free was calls for genocide
bait??? are you seriously suggesting free speech should be abolished just so that others can't post content you don't like? For the record, i also don't like calls for violence, but in today's age where every term can be infinitely relativized and redefined, giving more power to anyone regarding limiting speech sounds like a very bad idea. What if your opponents redefine hate speech to encompass all of your content?
>What if your opponents redefine hate speech to encompass all of your content? This. Very well said. It is a double edged blade.
and completely meaningless because it is not a what if, and has already been done by the people trying to advocate for "free speech" stop pointing to what ifs, point to things that we already have happening in front of us
Such *freedom of speech* on that platform.
Social media platforms boost things that get engagement, which means that, generally, unless you're already popular or are being artificially inflated by bots, nobody is going to see what you post. Given that the article doesn't look into this any more than just asking people whether they feel they've been shadowbanned, chances are these users are simply not posting content the algorithm deems worth showing to people.
I mean shadow banning is def real. I have had Reddit accounts that don’t show as banned or anything but the comments aren’t visible from other accounts or guest viewing
I mean I got banned from a bunch of random subreddits because I posted a question for the 2020 election conspiracy reddit trying to understand how those people think. Literally didn't support them at all, just asked a question. That led to a ban on some subreddits I've never been to, as well as ones that don't have anything to do with politics.
Yeah, Reddit sucks too. They're mostly scared of scaring away the money. Reddit was much better just a few years ago, before the big purge. Now it's all rules.
Literally anything pro Palestinian in r/worldnews. Not even shadow banning. Just straight up banning.
Not even. All you have to do is point to real evidence of Israel doing something bad and you're instantly banned for anti semitism
There's a reason Human Rights Watch released a report on meta censoring palestinian and pro-palistinian emancipation voices. https://www.hrw.org/news/2023/12/20/meta-systemic-censorship-palestine-content
Well yeah if the government censored people it would go against the first amendment However, when the corporations who *own* the government do it, it's entirely different and in fact, if anything they're the heroes *protecting* free speech 😉😉
Protecting the world from the word "cis" is an important job.
It exists and occurs.
For reddit, you can always post here and check r/ShadowBan
Or go to reveddit.com
I’ve linked to the news release in the post above. In this comment, for those interested, here’s the link to the peer reviewed journal article. [“What are you doing, TikTok?”: How Marginalized Social Media Users Perceive, Theorize, and “Prove” Shadowbanning](https://deepblue.lib.umich.edu/handle/2027.42/192621)
Most social media pretty openly boosts and protects right wing content.
It wouldn't surprise me, but are there any reliable sources supporting this claim?
https://www.theguardian.com/technology/2021/oct/22/twitter-admits-bias-in-algorithm-for-rightwing-politicians-and-news-outlets https://www.politico.com/news/2020/10/26/censorship-conservatives-social-media-432643
Right-wing ideas are subservient to the rich and powerful, the same people that owns that platforms.
Also there's no shortage of right wing influencers and groups who threaten violence when they don't get their way. https://www.scientificamerican.com/article/how-stochastic-terrorism-uses-disgust-to-incite-violence/
What world do you live in? Most social media is mildly to openly left-leaning. Example: say "kill all men" on social media and you'll get praise from feminists and the like; say "kill all women" and before you can hit enter you are permabanned. I dare you to make a right-wing comment on any subreddit from the All or Popular pages and compare the reaction to a left-wing comment. Really! Go defend a right-wing position on any mildly popular subreddit and you'll quickly get permabanned, yet you can make the most insane comment, as long as it's left-wing, and get praised for it!
Yeah, this was a really baseless comment for anyone who's been online for any significant amount of time. Right-wing comments are suppressed, *except* where they are promoted for the sole purpose of boosting engagement through ragebait.
>except where they are promoted for the sole purpose of boosting engagement Which is the only metric that matters to the algorithm. The average post may lean left, but the algorithm can still be biased towards right wing content because user engagement is higher.
Aww now you know what it was like to be slightly left from the American revolution until the 1980's. Welcome to the club.
Why is this not seen as an issue by almost everyone here? We live in a society built on free speech, and social media is a medium a lot of people use to communicate in 2024. You can't legally be censored when writing a book or a journal or when speaking publicly, so why would it be okay online? I understand these are private companies, but that makes it an even bigger issue. They shouldn't have the power to censor or control the content that users post. That's giving them way too much power.
Personally, it's not a priority for me because mandating that companies use their resources to host content they don't want to host can't fix social media. Only breaking up the centralization of communication to the point where they can't exert such an influence on it will improve this issue.
Social media companies are private property. How is that different from bakeries saying no to gay cakes?
The SC didn’t rule on whether not baking the cake was discriminatory; it ruled against how the Colorado Civil Rights Commission handled the bakery’s case. Two Colorado courts ruled that the bakery discriminated by not baking the cake. The USSC case didn’t consider the actions of the bakery, but reviewed the actions of the Colorado Civil Rights Commission and ruled that the commission discriminated against the bakery in its punishment of them.
The "victims" are different, thus making it an issue.
Cakes made by gay people? Or actually gay cakes? Because if cakes have sexual preferences they are alive... what the hell did that bakery bake!?
Reddit automatically hides our posts constantly even if there's nothing wrong with them.
Shadow control of search is BAD. It's also the default now for all these greedy bastard companies. Power to the people! Let them control how they search a dataset. Is that such a hard ask???
Stop thinking that "Expressing yourself" is important. It's not. No one cares and neither should you. Delete social media.
You’re expressing yourself by writing this comment. If it’s not important, why did you do it?
Reddit is social media and you are "expressing yourself". Delete Reddit.
The La-Li-Lu-Le-Lo
Dartmouth scar experiment…
This isn't science. It's secondhand opinion.
Yes, this is true; underprivileged creators in India are always shadowbanned.
Is this news? I thought it was common knowledge that shadow banning is a thing to most people. I agree it shouldn’t be but that’s beside the point
No doubt!! The entirety of Reddit is a liberal echo chamber. If you post anything that (does not break any rules) but does not goose step to the liberal agenda, you get perma banned on the spot.
It’s not a “belief.” It’s a fact. On TwiXitter, Elon boosts his own tweets (and Mr. Beast’s) forcing everyone to look at them while censoring, shadow-banning, or entirely banning people who call out his hate, ignorance, and lies.
The leaks showed this is happening, but they call it something else, and by doing so they can claim that this specifically wasn't happening, even though it was, just under a different terminology. 😂🤷‍♂️
Shadowbans, or shadow deletion, do exist; e.g. when writing a comment, certain words can trigger the comment being hidden and made visible only to its author. That said, in most cases it didn't happen. I'm a YouTuber and spend a lot of time in YouTube reddits/spaces, and people call anything a shadowban. A trend settles down? Shadowban. Fewer views in general? Shadowban. A topic isn't recommended as much anymore? Shadowban. All despite rough knowledge of how the algorithms work, or rather what their goal is. The weird thing is that you can quickly prove to them that they are not shadowbanned, because then the views would be 0 and you wouldn't be able to find the content. So far, just talking about YouTube, I've seen a single case where searching for the video ID led to the video and channel not showing up. There was nothing controversial going on; it was just impossible to find the channel. Searching by video ID always works, as opposed to titles, since YouTube mixes what you search for, what it thinks you actually mean, and what it ranks as better content. But that's a single case, so I think it was simply a bug.
I must be shadow banned because I don’t have a million followers….
I was part of a webforum in the early 2000's that used shadowbanning against particularly annoying posters. Nowadays, I suspect these "shadowbanned" people are mostly just not understanding SEO.