Matt Taibbi used to be a particularly sharp-tongued left-wing wiseass journalist. Unfortunately for him, he was afflicted by an inconvenient streak of intellectual honesty that caused him to take note of some of the Left's inconsistencies and irrationalities.
Taibbi was sufficiently indiscreet about those kinds of insights that it cost him a well-paying and prestigious position in the media establishment. He's now publishing high-quality opinion pieces on Substack. I recommend that those who can afford it chip in with a subscription. He's paid the price for speaking the truth, and I believe his voice will become more and more valuable over time.
Facebook announced Tuesday that it's stepping up efforts to clean its platform of QAnon content:
Starting today, we will remove any Facebook Pages, Groups and Instagram accounts representing QAnon, even if they contain no violent content…
Facebook had already taken several rounds of action against QAnon, including the removal this summer of "over 1,500 Pages and Groups." Restricting bans to groups featuring "discussions of potential violence" apparently didn't do the trick, however, so the platform expanded its bans to include content "tied to real world harm":
Other QAnon content [is] tied to different forms of real world harm, including recent claims that the west coast wildfires were started by certain groups, which diverted attention of local officials from fighting the fires and protecting the public.
Describing what QAnon is, in a way that satisfies what its followers might say represents their belief system and separates out the censorship issue, is not easy. The theory is constantly evolving and not terribly rational. It's also almost always described by mainstream outlets in terms that implicitly make the case for its banning, referencing concepts like "offline harm" or the above-mentioned "real-world harm" in descriptions. As you're learning what QAnon is, you're usually also learning that it is not tolerable or safe. …
[T]he Q ban pulls the curtain back on one of the more bizarre developments of the Trump era, the seeming about-face of the old-school liberals who were once the country's most faithful protectors of speech rights.
Bring up bans of QAnon or figures like Alex Jones (or even the suppression or removal of left-wing outlets like the World Socialist Web Site, teleSUR, or the Palestinian Information Centre) and you're likely to hear that the First Amendment rights of companies like Facebook and Google are paramount. We're frequently reminded there is no constitutional issue when private firms decide they don't want to profit off the circulation of hateful, dangerous, and possibly libelous conspiracy theories.
That argument is easy to understand, but it misses the complex new reality of speech in the Internet era. It is true that the First Amendment only regulates government bans. However, what do we call a situation when the overwhelming majority of news content is distributed across a handful of tech platforms, and those platforms are — openly — partners with the federal government, and law enforcement in particular?
In my mind, this argument became complicated in 2017, when the Senate Intelligence Committee dragged Facebook, Twitter, and Google to the Hill and essentially ordered them to come up with a "mission statement" explaining how they would prevent the "fomenting of discord."
Platforms that previously rejected the idea they were in the editing business — "We are a tech company, not a media company," said Mark Zuckerberg just a year before, in 2016, after meeting with the Pope — soon were agreeing to start working together with Congress, law enforcement, and government-affiliated groups like the Atlantic Council. They pledged to target foreign interference, "discord," and other problems.
Their decision might have been accelerated by a series of threats to increase regulation and taxation of the platforms, with Virginia Senator Mark Warner's 23-page white paper in 2018 proposing new rules for data collection being just one example. Whatever the reason for the about-face, the tech companies now work with the FBI in what the Bureau calls "private sector partnerships," which involve "strategic engagement… including threat indicator sharing."
Does any of this make "private" bans of content a First Amendment issue? The answer I usually get from lawyers is "probably not," but it's not clear-cut. It doesn't take much imagination to see how this could go sideways quickly, as the same platforms the FBI engages with often have records of working with security services to suppress speech in clearly inappropriate ways in other countries.
As far back as 2016, for instance, Israel's Justice Minister Ayelet Shaked was saying that Facebook and Google were complying with up to "95 percent" of its requests for content deletion. The minister noted cheerfully that the rate of cooperation had just risen sharply. Here's how Reuters described the sudden burst of enthusiasm on the part of the platforms to cooperate with the state:
Perhaps spurred by the ministerâ€™s threat to legislate to make companies open to prosecution if they host images or messages that encourage terrorism, their rate of voluntary compliance has soared from 50 percent in a year, she said.
Whether or not one views Internet bans as censorship or a First Amendment issue really depends on how much one buys concepts like "voluntary compliance."
The biggest long-term danger in all of this has always centered on the unique situation of media distribution now being concentrated in the hands of such a relatively small number of companies. Instead of breaking up these oligopolies, or finding more transparent ways of dealing with speech issues, there exists now a temptation for governments to leave the power of these opaque behemoth companies intact and appropriate their influence for their own ends.
As we've seen abroad, a relatively frictionless symbiosis can result: the platforms keep making monster sums, while security services, if they can wriggle inside the tent of these distributors, have an opportunity to control information in previously unheard-of ways. Particularly in a country like the United States, which has never had a full-time federal media regulator, such official leverage would represent a dramatic change in our culture. As one law professor put it to me when I first started writing about the subject two years ago, "What government doesn't want to control what news you see?"