One silver lining to the increased attention that Gamergate has received is that a lot of worthwhile pieces have been written about online abuse, particularly as it targets women and other marginalized groups. I learned a lot from this piece by Amanda Hess detailing a conversation with an FBI agent about why it’s so hard to prosecute people who make threats and otherwise use the Internet illegally. Particularly striking to me was the FBI agent’s comment about the volume of work:
“It was never a matter of not caring . . . the volume of work coming in every day was absolutely staggering. We had to do triage, almost as if we were in a war zone, deciding which patients to treat first.”
I recently finished reading “Hate Crimes in Cyberspace,” an important new book by Danielle Keats Citron. I hope to write up some thoughts here in the coming weeks. For now, I simply want to recommend that everyone read the book. It’s compelling, thoughtful, and timely. And in the meantime, the Guardian has an excellent review by Katharine Quarmby. Here’s an excerpt:
In Sartre’s play his three unhappy characters are trapped, without an exit. But we have one. The law, Citron writes, has what she calls an “expressive value” – it helps us distinguish between right and wrong, and it can result in offenders being put behind bars. Site operators can remove the anonymity of trolls and delete abusive speech. But the heavy lifting comes down to us, trapped in the virtual room with one another.
My new Huffington Post piece argues that the Supreme Court’s decision in Riley v. California reveals a willingness to think about technology as both quantitatively and qualitatively different, with implications for the scope of Fourth Amendment protection. I consider how emerging technology might affect the way that courts construe other constitutional rights, too. Here, I focus on the First Amendment.
Recent events have caused me to think about the ethics of editorial discretion. In particular, how should authors, editors, and publishers take into account the harm caused by publicizing information about other people’s private lives?
Over the weekend, an online magazine made a very poor editorial choice. A writer for the magazine wrote a piece about the proper use of the term “bro.” The piece included the sentence: “And I just don’t think the diminutive label of ‘bro’ should be [used] to describe more insidious sexism, let alone violent aggression like rape threats.” The words “rape threats” were hyperlinked to a single tweet by a female journalist.* The tweet was addressed directly to another person on Twitter; in it, the journalist had used a variant of the word “bro” while briefly alluding to rape threats she had received. (For non-Twitter-users: when a tweet begins with the “@” symbol and the username of another person on Twitter, only the sender and recipient of the tweet, and any people who happen to follow both users, will see it in their timelines. Other people can still find the tweet, which is technically public, but doing so requires a specific search.)
When the magazine published the piece, the female journalist objected, understandably, on several grounds: (1) the piece suggested that she had talked about her own rape threats the “wrong way”; (2) the piece gratuitously drew attention to those rape threats in a way that would likely provoke more threats; (3) the piece alluded to her rape threats casually, like any other material that might be thrown into a piece to make a point; and (4) the piece made an example of something that she had chosen to keep mostly private and that was undoubtedly disturbing to her.