They Blinded Me with Science!
It’s my mom’s birthday! Happy birthday to her! She’s 30 years older than me, which doesn’t rudely reveal her exact age but gives a fair idea.
On to business. Tobold and Gevlon had a bit of a back and forth last week regarding player skill. You should read it for yourself (if you haven't). Tobold argued along the lines of the Central Limit Theorem: most people are average, so when you have two dichotomous groups arguing with one another, there is, in fact, a huge silent majority in the middle, as I mentioned in my Dealing with Failing Management post. Gevlon, on the other hand, cites data that do not show a normal distribution curve.
I have huge respect for both these bloggers, but I'm going to have to go ahead and say that both are missing crucial elements in their arguments, which is how they come to such different outcomes from a similar set of data (their gameplay experiences). Tobold doesn't take into account the interaction of skill with dedication (though he mentions both), nor does he factor in time, an important third axis that we'll look at in a bit. Gevlon, meanwhile, cites data that's inherently flawed: household income data that's vastly over-generalized (it doesn't account for a ton of relevant factors, like the number of workers in the household, level of education, or geographical economic availability, but still lumps all the data together) and DPS from a single raid, which is simply too small a data set. And yet, even though I'm starting by saying they're both a little wrong, they are both surprisingly in agreement, and correct, once you factor in those missing elements.
In reality, players in any game will trend towards average, but there are different sets of players and thus different opportunities for average outcomes. Gevlon's essentially right about "good" and "bad," but oversimplifies it (as he's wont to do; I've written here before about his love of a dramatic, impactful statement). Instead, we have a grid on three axes: time, dedication, and skill. Overall "bad" players are usually lacking in two or three of those areas, whereas overall "good" players are strong in two or three.
Each of these categories is an important factor in performance. The good players will have a high level of skill, which will show through their reflexes, accuracy, and timing. Poor players will behave more slowly and clumsily. Good players will also be dedicated enough (and this is the part where Gevlon is most often on point) to look up the best specs, rotations, and peripherals (gems, enchants, reforges, etc.), but undedicated players may not. Lastly, a good player has to have enough time to learn their character, practice their skills, and do the research into the metagame. If they don't, then their skills and dedication may appear to suffer when, in fact, they just don't have enough free time to get those important things done.
I’m an English person, not a math person or an art person, so I’ll try to make an acceptable graphic to make my point. The bully graphic has shown up in odd places though, so maybe I’m better than I realized. Here we go:
So the "best" players would be in the foreground near the top right, as they have lots of skill, are dedicated enough to do their homework, and have enough time to practice. The worst players will be in the background near the origin (I believe that's the right word, but again, it's been a while).
The beauty of this is that both Tobold and Gevlon are right. The law of averages holds true, but it holds true among the subgroups without necessarily being evenly distributed among all players. Those that have all 3, 2 of the 3, 1 of the 3, or none of the 3 are likely to form their own sets of averages. Additionally, because there are so many different combinations, a single average for the entire playerbase won't necessarily appear. Take Gevlon's DPS data: there are clearly two sets of folks there, those who know their role and those who don't. If you split those two up, there are two bell curves, though the second is very steep.
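You can demonstrate this two-bell-curves idea with a quick simulation. This is just a sketch with made-up numbers (the means, spreads, and group sizes are my own invented stand-ins, not Gevlon's actual raid data), but it shows how two groups that are each nicely average within themselves produce a combined data set where almost nobody sits at the overall mean:

```python
import random
import statistics

random.seed(42)

# Hypothetical numbers: two subgroups of raiders, each normally
# distributed around its own mean DPS. The "knows their role" group
# is higher and tighter (the steep second curve mentioned above).
informed = [random.gauss(20000, 1500) for _ in range(100)]
uninformed = [random.gauss(9000, 3000) for _ in range(100)]
combined = informed + uninformed

# Each subgroup clusters neatly around its own average...
print(round(statistics.mean(informed)))    # roughly 20000
print(round(statistics.mean(uninformed)))  # roughly 9000

# ...but the overall average lands in the valley between the two
# humps, where very few raiders actually perform.
overall = statistics.mean(combined)
near_overall = sum(1 for d in combined if abs(d - overall) < 1500)
print(f"{near_overall} of {len(combined)} raiders within 1500 DPS of the overall mean")
```

In other words, averaging the whole raid describes almost no one in it; averaging each subgroup describes its members quite well.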
I can even hypothesize about the economic data. If you look at the lower portion of the graph, the "blue collar" portion, you'll see a pretty clear curve. The upper portion, though, does not show one. I suspect it's because of the way the data is parsed: in "good paying" jobs, a $5,000 margin isn't going to mean nearly as much as in "worse paying" jobs. As a result, you get a rather flat distribution. If instead you clumped it into larger numbers, like you see at the end, I suspect you'd come up with a clearer curve, though I'm not really able to test that myself, simply due to my ignorance of how to do so (Gev, if you want to try it and get back to me, I'd be happy to highlight it). Try clustering by 10, 20, or 25k intervals; I bet you'd get the curve, then.
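Here's a rough sketch of what I mean by re-clustering, using made-up stand-in data rather than the real income figures (I'm assuming a right-skewed, lognormal-ish shape, which is the usual textbook shape for income): count the same upper-tail households in $5k buckets and then in $25k buckets, and watch the flat trickle turn back into a declining slope.

```python
import random
from collections import Counter

random.seed(1)

# Made-up stand-in for household income data: right-skewed,
# since real income distributions have a long upper tail.
incomes = [random.lognormvariate(11, 0.6) for _ in range(10_000)]

def bin_counts(data, width):
    """Count how many values fall into each bucket of the given width."""
    return Counter(int(x // width) * width for x in data)

fine = bin_counts(incomes, 5_000)
coarse = bin_counts(incomes, 25_000)

# With $5k buckets, the $100k-$200k tail is a long, flat-looking
# trickle: many buckets, each with only a handful of households.
upper_fine = [fine.get(b, 0) for b in range(100_000, 200_000, 5_000)]
print(upper_fine)

# With $25k buckets, the very same households collapse into a
# visible, steadily declining slope: the curve reappears.
upper_coarse = [coarse.get(b, 0) for b in range(100_000, 200_000, 25_000)]
print(upper_coarse)
```

Same data both times; only the bucket width changes, which is exactly why the published table's fine-grained upper rows can look flat even when a curve is hiding in them.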
So the science in both arguments is, in fact, correct. There are normal distributions, as Tobold claims, and there are also different subsets, as Gevlon claims. The problem is that the subsets will show a distribution that doesn't appear over the entirety of the data. I see this all the time in my developmental classes. I don't have that many C's; I have more B's and low D's/high F's. It's because some students try, and often achieve a B, and other students don't, and often get D's or high F's (a D is a failing grade in my classes). There are a few A's, C's, and low F's, but essentially my data has two bell curves, like the raid DPS: those who know their role as a student and those who don't.
At any rate, both of those posts were quite interesting, and I like playing with data as a hobby, though I'm sure you can tell I am woefully uninformed about it. Let me know what you think!
Stubborn (who can do simple math quickly and well but doesn’t understand specialized math at all)