Neiwert, who writes for the Daily Kos, did not return a request for comment, but he told the Daily Beast that his suspension was over this 2017 book cover. The book, Alt-America: The Rise of the Radical Right in the Age of Trump, is a history of the radical right over the past three decades, and its cover is illustrated with KKK hoods atop an American flag. It's been his Twitter banner image for some time, and when Twitter demanded he remove it, he refused on principle.
When asked for comment, a representative from Twitter said that Neiwert was flagged for violating the policy against including "violent, hateful, or adult content within areas that are highly visible on Twitter, including in live video, profile or header images." Announced in March, the "sensitive media policy" bans "hateful imagery," which, according to Twitter, includes "any logo, symbol, or image that has the intention to promote hostility against people on the basis of race, religious affiliation, disability, sexual orientation, gender/gender identity or ethnicity/national origin." Twitter cites Nazi swastikas as an example, and while KKK hoods are certainly hateful, there's a big difference between putting one on your head while burning crosses in someone's yard and using a drawing of one on the cover of a book, especially when that book is a critique of racist extremism.
The irony here is that Neiwert himself is a proponent of social media bans. He told the Daily Beast that he thinks his account was targeted by "Twitter trolls" in retaliation for reporting their accounts, and while there is obviously a difference between posting hateful content and commenting on hateful content, this was absolutely bound to happen when tech companies moderate users' speech. There are an estimated 500 million tweets sent every day. It would be pretty damn costly for human beings to moderate all that content, if it's even possible, and so companies like Twitter use artificial intelligence and algorithms to do it instead. Of course they get it wrong! Nuance isn't exactly a robot's strong suit. Wrongful suspensions and bans are exactly what's going to happen when we demand censorship via corporations.
It's not just Twitter. Last week, YouTube announced new content guidelines after Vox's Carlos Maza pitched a fit over a comic calling him names. That very same day, the LA Times reports, the platform removed an interview with a British Holocaust denier that had been posted by the Southern Poverty Law Center. "We know that this might be disappointing, but it's important to us that YouTube is a safe place for all. If content breaks our rules, we remove it," YouTube told the SPLC in an email. The intention might be noble, but it's hard to see how censoring legitimate reporting is going to be good for social justice.
Collateral damage is inevitable when society embraces censorship, whether through the government or the private sector. It may not be deliberate, but there is just no way for social media platforms to police all of their user-generated content. An estimated 500 hours of content are uploaded to YouTube every minute. No algorithm or human being is capable of policing all of that, and demanding that they do means more people like David Neiwert will find themselves abruptly kicked off.