A Tale of Twitter Policy, Propaganda, and Pornography
Twitter on Monday unveiled a new "community-driven approach" to misleading information on its platform, allowing users to add notes to tweets they believe are false in an attempt to "add context" for other users.
The program, titled “Birdwatch,” means that no account and no tweet is exempt from annotation: users can "add context" to tweets posted by news outlets, reporters, and elected officials.
Twitter has suggested that the primary motivation behind the program was the “harmful” spread of “misinformation” regarding the 2020 presidential election.
The social media company banned the President following the Capitol Hill incident. Over the past year, an election year, Twitter also fact-checked the majority of the President’s tweets regarding the pandemic and voter fraud.
Twitter updated its Civic Integrity policy after the Capitol incident to "aggressively increase our enforcement action" on misleading and false claims surrounding the 2020 presidential election, which "has been the basis for incitement to violence around the country."
Twitter then removed more than 70,000 accounts that were engaged in sharing "harmful QAnon-associated content."
Another instance of Twitter’s controversial policy enforcement came when it blocked a New York Post article, an accurate story about Hunter Biden, during the election. The incident suggested that Twitter’s “fact checks” may be applied more readily to critiques of Democrats than to critiques of Republicans.
If Twitter were a private publisher, this would be immoral but unavoidable. However, it is a “platform” that, because of Section 230, faces no responsibility for what is tweeted. Twitter gets to censor when it wants, and to ignore hate speech and libel when it chooses.
Twitter did finally suspend several Antifa accounts, with a combined 71,000 followers, after the Biden Inauguration Day riots. A good start for Twitter, after arguably ignoring Antifa on its platform during the peak of last summer’s riots.
Another big problem on Twitter is foreign propaganda. The nonprofit journalism site ProPublica tracked more than 10,000 suspected fake Twitter accounts involved in a coordinated influence campaign with ties to the Chinese government. Among them were hacked accounts of users from around the world that posted propaganda and disinformation about the coronavirus outbreak, the Hong Kong protests, and other topics of state interest.
This is a situation where Twitter's fact checks could have been effectively applied.
However, monitoring political discourse and propaganda should be the least of Twitter's priorities. There are numerous instances of solicitation, sex trafficking, human trafficking, and child pornography flourishing on the platform.
Twitter refused to take down widely shared explicit images and videos, solicited from a sex trafficking victim when he was between the ages of 13 and 14, because an investigation “didn’t find a violation” of the company’s “policies,” a scathing lawsuit alleges.
The explicit material was solicited from the minor by sex traffickers posing as a 16-year-old girl. The traffickers then blackmailed him, threatening to release everything to his community unless he complied with their demands. One demand was that he send explicit material with another minor. He complied until he eventually blocked the traffickers, and the harassment came to an end.
A few years later, in 2019, the videos surfaced on Twitter under two accounts that were known to share child sexual abuse material, court papers allege.
The videos were reported to Twitter at least three times, first on Dec. 25, 2019, but Twitter failed to act until a federal law enforcement officer got involved, the suit states.
The teen became aware of the tweets in January 2020 after his classmates saw them. While his parents contacted the school and made police reports, he filed a complaint with Twitter saying there were two tweets depicting child pornography of himself that needed to be removed because they were illegal, harmful, and in violation of the site’s policies.
On January 28, Twitter replied to the teen and said they wouldn’t be taking down the material, which had already reached over 167,000 views and 2,223 retweets, the suit stated.
“Thanks for reaching out. We’ve reviewed the content, and didn’t find a violation of our policies, so no action will be taken at this time,” the response reads, according to the lawsuit.
“What do you mean you don’t see a problem? We both are minors right now and were minors at the time these videos were taken. We both were 13 years of age. We were baited, harassed, and threatened to take these videos that are now being posted without our permission. We did not authorize these videos AT ALL and they need to be taken down,” the teen replied to Twitter.
The teen even included his case number from a local law enforcement agency, but Twitter allegedly ignored him.
Two days later, through a mutual contact, the teen’s mother was connected with an agent from the Department of Homeland Security, who had the videos removed on January 30.
“Only after this take-down demand from a federal agent did Twitter suspend the user accounts that were distributing the CSAM and report the CSAM to the National Center on Missing and Exploited Children,” states the suit, filed by the National Center on Sexual Exploitation and two law firms.
Twitter declined to comment to multiple networks but later released this statement:
“Twitter has zero-tolerance for any material that features or promotes child sexual exploitation. We aggressively fight online child sexual abuse and have heavily invested in technology and tools to enforce our policy,” a Twitter spokesperson wrote.
“Our dedicated teams work to stay ahead of bad-faith actors and to ensure we’re doing everything we can to remove content, facilitate investigations, and protect minors from harm — both on and offline.”
This is just one recent example. A 2019 statement from the National Center on Sexual Exploitation noted that “accounts posting and selling pornographic content, from images to videos to live-streams, and even ‘escort’ dates, are undeniably flourishing on Twitter.”
The immense effort Twitter pours into biased and inconsistent enforcement of its policies on political discourse would be better directed at issues involving solicitation, trafficking, and pornography. At the end of the day, the company is protected by Section 230 and can proceed into the future however it wishes. But looking back at these instances, we can see where Twitter’s objectives are actually focused.