“Algospeak” is an evasion tactic for automated moderation on social media, where users create new words to use in place of keywords that might get picked up by AI-powered filters. People might refer to dead as “unalive,” or sex as “seggs,” or porn as “corn” (or simply the corn emoji).
There’s an algospeak term for Palestinians as well: “P*les+in1ans.” Its very existence speaks to a concern among many people posting and sharing pro-Palestine content during the war between Hamas and Israel that their posts are being unfairly suppressed. Some users believe their accounts, along with certain hashtags, have been “shadowbanned” as a result.
Algospeak is just one of the user-developed methods, of varying effectiveness, meant to dodge suppression on platforms like TikTok, Instagram, and Facebook. People might use unrelated hashtags, screenshot instead of repost, or avoid Arabic hashtags in an attempt to get around apparent but poorly understood limitations on content about Palestine. It’s not clear whether these methods really work, but their spread among activists and around the internet speaks to a real fear of having this content hidden from the rest of the world.
“Shadowbanning” is a term that gets thrown around a lot, but the practice itself is difficult to prove and not always easy to define. Below is a guide to its history, how it manifests, and what you as a social media user can do about it.
What is shadowbanning?
Shadowbanning is an often covert form of platform moderation that limits who sees a piece of content, rather than banning it altogether. According to a Vice dive into the history of the term, it likely originates as far back as the internet bulletin board systems of the 1980s.
In its earliest iterations, shadowbanning worked kind of like a digital quarantine: Shadowbanned users could still log in and post to the community, but no one else could see their posts. They were present but contained. If someone was shadowbanned by one…