The promise of social communities on legacy media websites seemed bright at first. Ideally, these communities inform journalists, host reasoned debate on the issues, and add value to the site's content. Or at least that's what was supposed to happen. Most legacy media companies now have comments and communities, and many are, let's just say, less than accommodating to reasoned debate. How did this happen? Is it fixable? Should it be fixed? What are others doing to combat these problems? How does fixing it conflict with First Amendment values? On the other hand, many website communities thrive without these problems. How did they come into being? How do they stay civil? How do they continue to live up to the promise of informing journalism, hosting reasoned debate, and adding to content value? This panel will explore the methods sites use to deal with nutjobs, as well as how to encourage and reward productive members of the community.
by Matt Haughey
After 11 years of running MetaFilter.com, I (and the other moderators) have been through just about everything, and we've built dozens of custom tools to weed out garbage, spammers, and scammers from the site.
I'll cover how to identify and solve problems such as identity disputes, trolling, sockpuppets, and other nefarious community issues; show off the custom tools we've developed for MetaFilter; and explain how to incorporate similar tools into your own community sites.
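MetaFilter's actual moderation tools aren't described here, so as a rough illustration of the kind of signal such tooling tends to rely on, below is a minimal sketch of one common sockpuppet heuristic: flagging pairs of accounts that repeatedly sign in from the same IP addresses. The function name, data layout, and threshold are all hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def find_sockpuppet_candidates(logins, min_shared_ips=3):
    """Flag pairs of accounts that share several sign-in IP addresses.

    `logins` is an iterable of (username, ip_address) pairs, e.g. parsed
    from a web server log. Account pairs seen on at least
    `min_shared_ips` distinct IPs are returned for human review; this is
    a noisy signal (shared households, offices, and carrier-grade NAT
    all produce innocent matches), not proof of abuse.
    """
    ips_by_user = defaultdict(set)
    for user, ip in logins:
        ips_by_user[user].add(ip)

    candidates = []
    for (user_a, ips_a), (user_b, ips_b) in combinations(ips_by_user.items(), 2):
        shared = ips_a & ips_b
        if len(shared) >= min_shared_ips:
            candidates.append((user_a, user_b, sorted(shared)))
    return candidates

# Example: two accounts repeatedly co-located on the same addresses.
logins = [
    ("alice", "203.0.113.5"), ("alice", "203.0.113.9"), ("alice", "198.51.100.2"),
    ("sock1", "203.0.113.5"), ("sock1", "203.0.113.9"), ("sock1", "198.51.100.2"),
    ("bob", "192.0.2.44"),
]
print(find_sockpuppet_candidates(logins))
# -> [('alice', 'sock1', ['198.51.100.2', '203.0.113.5', '203.0.113.9'])]
```

The point of a heuristic like this is to surface candidates for a moderator, not to ban automatically: the final call still depends on human judgment.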
The Internet is a community of communities, all filled with conflict and drama. Social justice and activism circles are as prone to these clashes as any other group, but the wounds inflicted can go beyond differences of opinion or personality discord: in “safe spaces”, tensions can be particularly fraught.
These incidents can be instructive and valuable. Conflict clarifies loyalties and solidifies friendships; conflict can reveal humility and pride. Controversy can teach anti-oppression activists how to avoid unintentionally inflicting harm on folks who do not share their privileges.
But while call-outs can be essential to honest discussions of inequality, drama is just as often destructive. Conflict comes at a price, sometimes with little payoff. Internet drama costs emotional energy, physical resources, time, and relationships. Blog wars, 500+ comment threads, and 140-character fights are rarely in anyone’s best interest: they are usually costly to the attacker, the target, and those reading from the sidelines.
Drama and conflict in online social justice communities are usually best minimized and carefully managed. This presentation, which focuses more on examination than instruction, is not just about how to check your privilege. It’s about when to call out, and how to avoid abusing others. It’s about how to respond, when to check out, and how to take care of yourself in a community that demands everything of you.
Online services tread a narrow line between enabling free speech and preventing abuse of their members. Offline, harassment is often determined contextually; unfortunately, website owners and operators often lack the time, insight, and ability to determine the context surrounding a given behavior. Moreover, the speech itself may not be directly abusive, so identifying other vectors for abuse is becoming increasingly important. Del Harvey, Director of Trust and Safety at Twitter, has spent much of the past two years developing objective litmus tests for evaluating potentially abusive behavior in the absence of context. This presentation will draw on that work at Twitter, as well as Del's previous background working with online safety advocates, to offer practical policies and suggestions that sites can adopt with minimal engineering investment and personnel.
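Those litmus tests aren't public, so as a hedged sketch of what a context-free rule might look like, the following hypothetical check ignores message content entirely and considers only objective behavioral signals: account age and how heavily an account's @-mentions concentrate on a single target. All names and thresholds are invented for illustration.

```python
from collections import Counter

def flag_for_review(account_age_days, mentions, min_mentions=20,
                    target_share=0.8, new_account_days=7):
    """Hypothetical context-free litmus test for targeted harassment.

    Rather than judging what a message says, this looks only at
    objective behavioral signals: a brand-new account whose @-mentions
    are overwhelmingly aimed at one user gets escalated to a human
    reviewer. `mentions` is a list of usernames the account has
    @-mentioned. Returns (should_review, reason).
    """
    if len(mentions) < min_mentions:
        return False, "too little activity to judge"

    target, count = Counter(mentions).most_common(1)[0]
    share = count / len(mentions)

    if account_age_days <= new_account_days and share >= target_share:
        return True, f"new account; {share:.0%} of mentions target @{target}"
    return False, "no objective signal tripped"

# A day-old account aiming 19 of its 20 mentions at one person trips the test.
print(flag_for_review(account_age_days=1,
                      mentions=["victim"] * 19 + ["friend"]))
# -> (True, 'new account; 95% of mentions target @victim')
```

A rule like this can be evaluated cheaply and consistently without reading the messages themselves, which is the appeal of context-free tests for a small trust-and-safety team.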
11th–15th March 2011