The Free Speech Problem

Nearly everyone agrees that online harassment is a major problem in need of an immediate solution, but in the hunt for trolls, some are too quick to dismiss legitimate concerns about free speech.

(Cross-posted at Medium)

In the wake of the GamerGate blowup, most of America is aware of our epidemic of online harassment. Unrepentant trolls on Twitter, Facebook, and similar services exploit anonymity and the ease of creating sockpuppet accounts to stalk, threaten, dox, and torment victims, even driving some to the point of suicide. But while activists rightly raise alarms about the problem, their proposed solutions often carry the risk of limiting the free speech and expression that make the Internet so powerful.

Writing at Boing Boing, for example, Glenn Fleishman explores Twitter’s problem with serial offenders, known trolls who have been banned but then return thanks to Twitter’s failure to enforce their own policy against serial accounts. Roughly two-thirds through the piece, Fleishman leaps abruptly to a rather dramatic conclusion: “the fight for anonymous speech ends when promotion of it is inexorably and demonstrably linked to enabling harassers.”

“You can see this in every online forum,” Fleishman continues, “every comment thread, every social network: where speech has no consequence, harassment flourishes.”

He’s not wrong about that second point, though he leaves out an important detail: One need not turn to the Internet to see the harassment that results from anonymous speech. Head to your local sports arena, and observe the father teaching his child to call the opposing team a bunch of fags. Take in the conversation at your neighborhood bar on any given evening–or ask a friend to hold a couple of microphones as she walks down the sidewalk. Though it may be more frequent (and perhaps more concentrated) on the Internet, anonymous harassment is not solely a digital phenomenon.

Humans are prone to dark urges, and the way we choose to deal with them depends on a lot of factors: our sense of anonymity and consequence, sure, but also our upbringing and values, our internalized and cultural etiquette, and what one might describe as social IQ. As the Internet has made the world smaller and brought each of us into contact with thousands or millions of others, it has exposed us to people whose etiquette and values may be vastly different from our own, others who suffer from mental illness or neurological disability, and still others who are just sociopathic assholes.

This last point is not to excuse or justify the behavior of trolls and harassers, but to highlight the problem of defining “harassment” in context. For example: Read some accounts of interactions with Benjanun Sriduangkaew, AKA Requires Hate, AKA Winterfox, and few would dispute the use of the term “harassment.” Ask an activist opposing street harassment, however, and a definition of harassment might include greeting a stranger with “Hello there.”

In his Boing Boing piece, Fleishman quotes a tweet from Model View Culture editor Shanley Kane asserting that it’s “patently false that we need to protect the abusive speech of dominant groups to protect the speech of marginalized groups.” Kane’s point is valid, but it’s also a straw man; few who question the means of preventing harassment are advocating the defense of “abusive speech.” They are, rightly, concerned about the challenge of defining “abusive,” and of policing true abuse without chilling free speech.

In the world outside the Internet, this line is not particularly vague. There are legal definitions for harassment, stalking, terrorist threats, inciting riots, and similar offenses. In many states, a person can face homicide charges for talking someone into suicide, and courts have consistently held that true threats are not speech protected under the First Amendment.

Asking law enforcement to police online communication does not seem feasible at present. Even if police treated Twitter threats and YouTube comments with the same gravity they treat threatening phone calls or letters (many don’t), the sheer volume of online speech that might warrant criminal investigation would cripple our existing agencies. The press often reports that the Secret Service is stretched thin pursuing the record number of credible threats against President Obama; imagine the workload if they also had to protect Anita Sarkeesian, Alex from Target, and every teenage fan of Justin Bieber.

In the absence of obvious solutions, many who regard online harassment as a crisis have directed their criticism at the social networks themselves. One promo I saw recently for a roundtable discussion began with a question: “Do you think Twitter and Facebook would do more about harassment if they had been founded by women or people of color?”

This kind of question might be fair to ask, but it seems to ignore the core problem at a network like Twitter, which isn’t ignorant of the problem of harassment so much as it is vigilant about its defense of free speech. As a private company, Twitter is under no obligation to respect Constitutional definitions of free speech, or to protect free speech in any form; they do so because they believe in the value of a free and open exchange. Twitter’s founders and staff have repeatedly shown themselves to be ardent defenders of free speech, as in 2010, when they went to the ACLU rather than honor a subpoena from Pennsylvania’s then-Attorney General Tom Corbett seeking the identities of those who criticized him online. A 2012 article from the New York Times profiles Twitter’s chief attorney Alexander MacGillivray and his dedication to free speech online. MacGillivray left Twitter in 2013, just before their IPO; he was recently named U.S. Deputy Chief Technology Officer by President Obama.

To return to Fleishman’s piece and its proposed limits on free speech: he qualifies his bold assertion with the statement that “Anonymity is a powerful tool for free speech and democracy when countering the power of the state.” What he seems to overlook is that it’s not always clear when one might find oneself facing the power of the state, as with the subpoena from Tom Corbett or the April police raid ordered by Peoria Mayor Jim Ardis against the Twitter users behind a parody account.

Nor does the loss of anonymity necessarily stop a troll. While anonymous trolls seem to be the worst and most common offenders, abuse and harassment are common on Facebook, where users must disclose their identities and sockpuppet accounts are banned. On Twitter, not all harassers hide behind a mask. Observe noted literary critic and serial harasser Ed Champion, whose years-long pattern of abuse, stalking, and harassment may have come to a dramatic close in September, when a critical mass of writers finally found the courage to call him on his behavior. Note also that Champion (who, some have suggested, is battling mental illness) treated his victims as if they were the ones harassing him, and even blamed them for driving him to suicide.

That sort of potential for abuse represents perhaps the largest challenge in preventing harassment, as the same trolls who manipulate Twitter’s signup process to proliferate sockpuppet accounts will surely abuse anti-abuse tools to harm and silence their victims. It’s hardly a hypothetical; at Facebook, where offensive content is identified through a crowdsourced reporting tool, users have seen their accounts banned for posting images of two men kissing, breastfeeding mothers, and proud breast cancer survivors, among many other topics.

A recent Pew study found that Twitter and Facebook users report vastly different experiences when it comes to political content, with Twitter users far more likely to see and engage in political messaging. The researchers cited differences in the nature of the platforms, and while they did not say so explicitly, it’s not a stretch to imagine that Twitter’s embrace of free and open dialog, and its loose rules around anonymity, lend themselves better to more controversial topics. Twitter might limit harassment by giving administrators (or worse, fellow users) a stronger ability to censor what may be said, but the knowledge that moderators are watching may also chill users’ willingness to engage on hot topics, particularly when definitions of “abuse” and “harassment” are so subjective among online users.

A mechanism that permits a reader to voluntarily filter content seems friendlier to free speech; even the most ardent free speech activist would never suggest there is any moral obligation for others to listen. One such solution, Block Together, lets users share their block lists and filter by other criteria, including how long an account has been online. By blocking all users who’ve been active less than 14 days, for instance, a person might reasonably expect to avoid most or all troll sockpuppets. The solution isn’t foolproof, however; by subscribing to the social mores of their peers, Block Together users may unwittingly participate in the erasure of marginalized voices, which would seem contrary to the intent of most anti-harassment activists.
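To make the idea concrete, here is a minimal sketch of what that kind of account-age filter might look like; the field names, data shape, and 14-day threshold are illustrative assumptions for this post, not Block Together’s actual implementation or Twitter’s API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical threshold, matching the 14-day example above.
MIN_ACCOUNT_AGE = timedelta(days=14)

def should_filter(created_at: datetime, now: datetime = None) -> bool:
    """Return True for accounts younger than MIN_ACCOUNT_AGE --
    the freshly registered sockpuppets this kind of filter targets."""
    if now is None:
        now = datetime.now(timezone.utc)
    return (now - created_at) < MIN_ACCOUNT_AGE

# Example: an account registered three days ago gets filtered,
# while a two-year-old account passes through untouched.
now = datetime.now(timezone.utc)
assert should_filter(now - timedelta(days=3))
assert not should_filter(now - timedelta(days=730))
```

The important property is that a filter like this runs on the reader’s side: it changes what a subscriber sees, not what anyone is allowed to say.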

Fleishman’s own solution, that Twitter institute a tiered system by which everyday users can have their ‘humanity’ verified, is a good one, though it raises questions about how that third tier, intended, it seems, as a sort of sockpuppet ghetto, would be treated. Twitter has already developed a bit of a caste system, in which recommended accounts are almost all celebrities, and new users are encouraged to follow ten celebrity or corporate accounts on signup. For a forum dedicated to free speech, the further division of users into ‘tiers’ raises disturbing possibilities, even if the motives are positive.

No reasonable person would argue that online harassment is not a problem in need of immediate solutions. The challenge is in finding and developing solutions that do not limit or chill the free and open exchange of ideas that is the greatest power of the Internet. While some activists seem only too willing to dispense with free speech in the interest of preventing harassment, it falls to the online community to decide whether a global network like Twitter or Facebook has a greater obligation to provide a safe space for vulnerable users, and what measures are reasonable toward that end.
