The Development of Digital Morality
When did you learn that hurting others was wrong?
Do you remember a particular moment in your life when you suddenly became moral (assuming, of course, that you are, dear reader)?
Child development psychologists suggest that as we age, we develop our views of right and wrong through a long series of experiences in which we do “right” or “wrong” things and observe the results. When it’s a “bad” behavior, we are “punished” in one way or another: overtly, through actual, well, punishment, or implicitly, such as seeing disappointment in the eyes of someone we care about, feeling a strong internal sense of shame, or having a social stigma attached to us. Every little experience adds up to an overarching personal morality. In essence, every experience matters and no single experience does, though some events may weigh more heavily than others.
How does this translate to digital? We hear arguments that over-immersion in “fake” worlds makes it hard for people to differentiate between fiction and reality. I think, though, that this argument is completely backwards. I think we’re having a hard time figuring out how to create moral behavior in digital environments precisely because we’ve become so good at compartmentalizing our online and offline personalities.
Additionally, we’ve been taught from a young age that fictional violence is funny. How many of us were reared on violent cartoons? Our parents’ generation feared that Looney Tunes and the like might make us violent people, so they sat us down and talked to us about the difference between Wile E. Coyote swallowing a stick of dynamite and hitting our brother in the face (I am an only child, for the record).
So what we have in digital environments is a total absence of the negative responses we get in the real world when we’re bad, mixed with the idea that fictional violence may be entertaining. One cannot observe disappointment in others’ eyes; there is no long-term social stigma, thanks to renames and server switches; there is no real punishment (partially because Blizzard’s “penalty volcano” is uselessly forgiving and partially because a simple ban isn’t that much of a punishment); and there is only the possibility of internal shame, which depends on many factors, including maturity, upbringing, and experience. So many of the external controls that taught us right and wrong as children are simply missing from a digital environment.
Still, while that might explain why learning online morality is so difficult, it doesn’t explain why more of some people’s (jerks’) offline moral schemes don’t transfer to their online ones. That’s where the compartmentalization comes in. Being able to recognize the difference between “real world” bad behavior and “fictional” bad behavior is incredibly important in the real world, so we compartmentalize “right” and “wrong” so we can laugh at slapstick comedy when appropriate but avoid replicating such slapstick violence in the workplace. So the problem, then, is that jerks haven’t realized that there’s a person on the other side of the computer; to them, “it’s just a game.”
I think that’s the hurdle that building online morality will have to overcome. In our recent discussions about building better player communities through prosocial behaviors, I’ve loved a lot of what I’ve seen, but I’ve simultaneously had doubts about its effectiveness. As many commentators pointed out, kind people will do kind things, and cruel people won’t. If the cause is the distance the Internet provides, the ability to sit back and compartmentalize cruelty the way we do cartoon violence, as “just a game,” then those are the behaviors we need to address. Incentivizing prosocial behaviors is great, and I support it, but I don’t think it will inherently change people’s play.
I think the AoC murder points system attempted to do this, but twice failed miserably. The problem is that while this form of “punishment” might parallel a parental role in the offline world, it doesn’t carry the message of disappointment, attach a serious social stigma, or create shame in the hearts of players. Instead, it functions like a score, something we as gamers always want to increase.
I don’t claim to have a solution to this problem; I’m only theoretically identifying what the problem actually is. The one counterexample I know of is LoL’s tribunal system (yes, I’m falling back on my one example again), which actually draws on the community’s standards of acceptable behavior, thereby clearly signaling disappointment as well as delivering a punishment, and which has been shown to be effective through studies of repeat offenders (or, in this case, the surprising lack thereof). Beyond that, I’m simply not sure what else is out there. Thoughts? Ideas?
Stubborn (and pensive)