By Angela Hill
Akemi Bourgeois was floored a couple of weeks ago when members of a young mothers group labeled her an idiot and dimwit on their online comment board in response to a humorous blog post Bourgeois wrote about reconfiguring her car seats for her twins.
Avid video gamer Christopher Victa has seen an increasing barrage of nasty comments from fellow online gamers, attacking others for the way they play, or even just for their screen names, he said. "It's pretty sad," he added. "It's just a game."
PR executive Debra Bethard-Caplick recently unfriended several people on Facebook because of "rabid and virulent" personal attacks from friends of friends of friends who barge into conversations and try to start electronic shouting matches.
"You're always going to have someone who doesn't like what you do or say," she said. "But I can't believe if they were standing here looking at me they would say something so vile as they do. Somehow if it's on the Internet, it's OK."
Since the dawn of electronic communications, mean people have trolled the world of the Web, taking personal jabs at total strangers about everything from politics and movies to recipes and knitting circles, making outrageous, hurtful and sometimes bullying remarks -- especially under cover of anonymity.
But why do we get so mean just because we can't be seen?
"We behave in a different way when online. It's as if you're wearing a cloak or a mask and, well, you can get away with it," says Daniel Martin, associate professor of management at California State University East Bay and a visiting associate professor at Stanford University's Center for Compassion and Altruism Research and Education. He recently attended a science of compassion conference that brought together researchers from across the country to look at ways to improve human communications.
"Psychologists call it deindividuation," Martin said. "When in a mask or uniform or group, you cease to recognize even yourself as an individual and therefore don't see others that way, either, don't see how you're hurting someone."
Dacher Keltner, University of California, Berkeley social psychology professor and director of the Greater Good Science Center in Berkeley, agrees it's nothing new that humans are judgmental. "But we've become this hyper-commenting society," he said. "In one sense, it's very old, the act of expressing opinions. In another sense, these new sites and online experiences have brought new dimensions with them. They take out the face-to-face aspect, even the voice-to-voice of a phone conversation.
"That's not to say there aren't a lot of good things going on out there in social media," he said. "But there are certainly trolls, too."
While e-meanies admittedly are a minority in the vast openness and freedom of the online realm, Martin says, they seem to be lurking everywhere -- likely because venues for such behavior are constantly popping up: more social networking sites, more comment boards, more places to vent, rant and roar.
Some psychologists and social media experts say one way to reduce online conflicts and foster civil public discourse is to remind people that "they are who they are," Martin said, by encouraging them to use their real identities, thereby combating deindividuation.
"Basically, if I'm writing something, and I know my mom and my colleagues and my daughters are going to read it, I'm going to be on my best behavior," Keltner said.
Some sites are moving in that direction. YouTube recently announced on its blog an effort to get people who post comments on the video-sharing site to use their real names. Movie review aggregator site Rotten Tomatoes is examining its anonymous commenting policy after venomous, threatening comments about the "Batman" movie reviews in July forced the company to temporarily suspend its comment board.
Even Facebook has been exploring new ways to reduce online conflicts and cyberbullying with kinder, gentler language on various aspects of the site.
Recently it has been working with a team of researchers from Yale University and UC Berkeley, including Keltner and other scientists from the Greater Good Science Center, to come up with ways to promote compassionate communications. During the past five months, they've developed emotionally intelligent messages to replace the reporting options users see when someone is offended by a photo or comment. Instead of "I don't like this photo," the reporting option now reads, "I don't like this photo because ...," followed by a series of choices such as "It's embarrassing" or "It makes me sad."
"We start from the assumption that Facebook is a lot like life, and life has conflicts, people with different goals, opinions," Keltner said. "And we work from there, trying to build in the wisdom of the social sciences.
"Technology is taking us in so many new places," he said. "But the need for the human dimension of compassion and kindness is greater than ever."