Reaching 95%-ile isn't very impressive because it's not that hard to do. I think this is one of my most ridiculable ideas. It doesn't help that, when stated nakedly, that sounds elitist. But I think it's just the opposite: most people can become (relatively) good at most things.
Note that when I say 95%-ile, I mean 95%-ile among people who participate, not all people (for many activities, just doing it at all makes you 99%-ile or above across all people). I'm also not referring to 95%-ile among people who practice regularly. The "one weird trick" is that, for a lot of activities, being something like 10%-ile among people who practice can make you something like 90%-ile or 99%-ile among people who participate.
This post is going to refer to specifics since the discussions I've seen about this are all in the abstract, which turns them into Rorschach tests. For example, Scott Adams has a widely cited post claiming that it's better to be a generalist than a specialist because, to become "extraordinary", you have to either be "the best" at one thing or 75%-ile at two things. If that were strictly true, it would surely be better to be a generalist, but that's of course exaggeration and it's possible to get a lot of value out of a specialized skill without being "the best"; since the precise claim, as written, is obviously silly and the rest of the post is vague handwaving, discussions will inevitably devolve into people stating their prior beliefs and basically ignoring the content of the post.
Personally, in every activity I've participated in where it's possible to get a rough percentile ranking, people who are 95%-ile constantly make mistakes that seem like they should be easy to observe and correct. "Real world" activities typically can't be reduced to a percentile rating, but achieving what appears to be a similar level of proficiency seems similarly easy.
We'll start by looking at Overwatch (a video game) in detail because it's an activity I'm familiar with where it's easy to get ranking information and observe what's happening, and then we'll look at some "real world" examples where we can observe the same phenomena, although we won't be able to get ranking information for real world examples1.
At 90%-ile and 95%-ile ranks in Overwatch, the vast majority of players will pretty much constantly make basic, game-losing mistakes. These are simple mistakes like standing next to the objective instead of on top of it while the match timer runs out, turning a probable victory into a certain defeat. See the attached footnote if you want enough detail about specific mistakes to decide for yourself whether a mistake is "basic" or not2.
Some reasons we might expect this to happen are:

1. players don't care about winning
2. players haven't put in enough time to fix their mistakes
3. players don't have the talent to improve
4. players don't understand how to spot and fix their mistakes
In Overwatch, you may see a lot of (1), people who don't seem to care about winning, at lower ranks, but by the time you get to 30%-ile, it's common to see people indicate their desire to win in various ways, such as yelling at players they think don't care about winning or aren't skilled, complaining about players they think made mistakes that cost their team the game, etc.3. Other than the occasional troll, it's not unreasonable to think that people are generally trying to win when they're severely angered by losing.
(2), not having put in enough time to fix their mistakes, will, at some point, apply to all players who are improving, but if you look at the median time played at 50%-ile, people who are stably ranked there have put in hundreds of hours (and the median time played at higher ranks is higher). Given how simple the mistakes we're discussing are, not having put in enough time can't be the explanation for most players.
(3), a lack of talent, is a common complaint among low-ranked players on Overwatch forums: they say they just aren't talented and can never get better. Most people probably don't have the talent to play in a professional league regardless of their practice regimen, but when you can get to 95%-ile by fixing mistakes like "not realizing that you should stand on the objective", you don't need much talent to get to 95%-ile.
While (4), people not understanding how to spot and fix their mistakes, isn't the only other possible explanation4, I believe it's the most likely explanation for most players. Most players who express frustration that they're stuck at a rank up to maybe 95%-ile or 99%-ile don't seem to realize that they could drastically improve by observing their own gameplay or having someone else look at their gameplay.
One thing that's curious about this is that Overwatch makes it easy to spot basic mistakes (compared to most other activities). After you're killed, the game shows you how you died from the viewpoint of the player who killed you, allowing you to see what led to your death. Overwatch also records the entire game and lets you watch a replay of the game, allowing you to figure out what happened and why the game was won or lost. In many other games, you'd have to set up recording software to be able to view a replay.
If you read Overwatch forums, you'll see a regular stream of posts that are basically "I'm SOOOOOO FRUSTRATED! I've played this game for 1200 hours and I'm still ranked 10%-ile, [some Overwatch specific stuff that will vary from player to player]". Another user will inevitably respond with something like "we can't tell what's wrong from your text, please post a video of your gameplay". In the cases where the original poster responds with a recording of their play, people will post helpful feedback that will immediately make the player much better if they take it seriously. If you follow these people who ask for help, you'll often see them ask for feedback at a much higher rank (e.g., moving from 10%-ile to 40%-ile) shortly afterwards. It's nice to see that the advice works, but it's unfortunate that so many players don't realize that watching their own recordings or posting recordings for feedback could have saved 1198 hours of frustration.
It appears to be common for Overwatch players (well into 95%-ile and above) to be frustrated with their rank and express a strong desire to improve without ever reviewing their own gameplay or asking anyone else to review it. Overwatch provides the tools to make it relatively easy to get feedback, but people who very strongly express a desire to improve don't avail themselves of those tools.
My experience is that other games are similar and I think that "real life" activities aren't so different, although there are some complications.
One complication is that real life activities tend not to have a single, one-dimensional, objective to optimize for. Another is that what makes someone good at a real life activity tends to be poorly understood (by comparison to games and sports) even in relation to a specific, well defined, goal.
Games with rating systems are easy to optimize for: your meta-goal can be to get a high rating, which can typically be achieved by increasing your win rate by fixing the kinds of mistakes described above, like not realizing that you should step onto the objective. For any particular mistake, you can even make a reasonable guess at the impact on your win rate and therefore the impact on your rating.
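Overwatch's actual rating formula isn't public, so treat the following as a hedged back-of-the-envelope sketch rather than the game's real math: under the standard Elo expected-score curve, you can estimate how a change in win rate translates into a change in rating (the function name and exact numbers are illustrative).

```python
import math

def elo_gap_for_win_rate(win_rate):
    """Rating gap d at which the Elo expected score,
    1 / (1 + 10 ** (-d / 400)), equals win_rate.

    Intuition: if fixing one class of mistake lifts your win rate
    against current-rank opponents from 50% to win_rate, you'd
    expect to climb until you're matched against players about this
    much stronger, where your win rate settles back to 50%."""
    return 400 * math.log10(win_rate / (1 - win_rate))

print(round(elo_gap_for_win_rate(0.55)))  # ~35 rating points
print(round(elo_gap_for_win_rate(0.60)))  # ~70 rating points
```

The exact numbers depend on the matchmaking model, but the shape of the estimate, where each percentage point of win rate near 50% is worth a roughly constant rating gain, holds for any Elo-like system.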
In real life, if you want to be (for example) "a good speaker", that might mean that you want to give informative talks that help people learn or that you want to give entertaining talks that people enjoy or that you want to give keynotes at prestigious conferences or that you want to be asked to give talks for $50k an appearance. Those are all different objectives, with different strategies for achieving them and for some particular mistake (e.g., spending 8 minutes on introducing yourself during a 20 minute talk), it's unclear what that means with respect to your goal.
Another thing that makes games, at least mainstream ones, easy to optimize for is that they tend to have a lot of aficionados who have obsessively tried to figure out what's effective. This means that if you want to improve, unless you're trying to be among the top in the world, you can simply figure out what resources have worked for other people, pick one up, read/watch it, and then practice. For example, if you want to be 99%-ile in a trick-taking card game like bridge or spades (among all players, not subgroups like "ACBL players with masterpoints" or "people who regularly attend North American Bridge Championships"), you can do this by picking up one of the well-regarded books on the fundamentals, reading it, and then practicing what it teaches.
If you want to become a good speaker and you have a specific definition of “a good speaker” in mind, there still isn't an obvious path forward. Great speakers will give directly contradictory advice (e.g., avoid focusing on presentation skills vs. practice presentation skills). Relatively few people obsessively try to improve and figure out what works, which results in a lack of rigorous curricula for improving. However, this also means that it's easy to improve in percentile terms since relatively few people are trying to improve at all.
Despite all of the caveats above, my belief is that it's easier to become relatively good at real life activities than at games or sports because there's so little deliberate practice put into most real life activities. Just for example, if you're a local table tennis hotshot who can beat every rando at a local bar, when you challenge someone to a game and they say "sure, what's your rating?", you know you're in for a shellacking from someone who can probably beat you while playing with a shoe brush. You're probably 99%-ile, but someone with no talent who's put in the time to practice the basics is going to have a serve that you can't return, as well as be able to kill any shot a local bar expert can consistently hit. In most real life activities, there's almost no one who puts in a level of deliberate practice equivalent to someone who goes down to their local table tennis club and practices two hours a week, let alone someone like a top pro, who might seriously train for four hours a day.
To give a couple of concrete examples, I helped Leah prepare for talks from 2013 to 2017. The first couple practice talks she gave were about the same as you'd expect if you walked into a random talk at a large tech conference. For the first couple years she was speaking, she did something like 30 or so practice runs for each public talk, of which I watched and gave feedback on half. Her first public talk was (IMO) well above average for a talk at a large, well regarded, tech conference and her talks got better from there until she stopped speaking in 2017.
As we discussed above, this is more subjective than game ratings and there's no way to really determine a percentile, but if you look at how most people prepare for talks, it's not too surprising that Leah was above average. At one of the first conferences she spoke at, the night before the conference, we talked to another speaker who mentioned that they hadn't finished their talk yet and only had fifteen minutes of material (for a forty minute talk). They were trying to figure out how to fill the rest of the time. That kind of preparation isn't unusual and the vast majority of talks prepared like that aren't great.
Most people consider doing 30 practice runs for a talk to be absurd, a totally obsessive amount of practice, but I think Gary Bernhardt has it right when he says that, if you're giving a 30-minute talk to a 300 person audience, that's 150 person-hours watching your talk, so it's not obviously unreasonable to spend 15 hours practicing (and 30 practice runs will probably take less than 15 hours since you can cut a number of the runs short and/or repeatedly practice problem sections). One thing to note is that this level of practice, considered obsessive when giving a talk, still pales in comparison to the amount of time a middling table tennis club player will spend practicing.
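Bernhardt's arithmetic is easy to check; the numbers below just restate the example from the paragraph above:

```python
talk_minutes = 30
audience_size = 300
practice_hours = 15  # 30 runs, many cut short or partial

# Total human time spent watching the talk, in person-hours.
audience_hours = talk_minutes / 60 * audience_size
print(audience_hours)                   # 150.0 person-hours
print(practice_hours / audience_hours)  # 0.1: practice is 10% of watch time
```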
If you've studied pedagogy, you might say that the help I gave to Leah was incredibly lame. It's known that having laypeople try to figure out how to improve among themselves is among the worst possible ways to learn something: good instruction is more effective, and having a skilled coach or teacher give one-on-one instruction is more effective still5. That's 100% true: my help was incredibly lame. However, most people aren't going to practice a talk more than a couple of times and many won't even practice a single time (I don't have great data proving this; it's from informally polling speakers at conferences I've attended). This makes Leah's 30 practice runs an extraordinary amount of practice compared to most speakers, which resulted in a relatively good outcome even though we were using one of the worst possible techniques for improvement.
My writing is another example. I'm not going to compare myself to anyone else, but my writing improved dramatically the first couple of years I wrote this blog just because I spent a little bit of effort on getting and taking feedback.
Leah read one or two drafts of almost every post and gave me feedback. On the first posts, since neither one of us knew anything about writing, we had a hard time identifying what was wrong. If I had some awkward prose or confusing narrative structure, we'd be able to point at it and say "that looks wrong" without being able to describe what was wrong or suggest a fix. It was like, in the era before spellcheck, when you misspelled a word and could tell that something was wrong, but every permutation you came up with was just as wrong.
My fix for that was to hire a professional editor whose writing I respected with the instructions "I don't care about spelling and grammar fixes, there are fundamental problems with my writing that I don't understand, please explain to me what they are"6. I think this was more effective than my helping Leah with talks because we got someone who's basically a professional coach involved. An example of something my editor helped us with was giving us a vocabulary we could use to discuss structural problems, the way design patterns gave people a vocabulary to talk about OO design.
Programming is similar to the real life examples above in that it's impossible to assign a rating or calculate percentiles or anything like that, but it is still possible to make significant improvements relative to your former self without too much effort by getting feedback on what you're doing.
For example, here's one thing Michael Malis did:
One incredibly useful exercise I’ve found is to watch myself program. Throughout the week, I have a program running in the background that records my screen. At the end of the week, I’ll watch a few segments from the previous week. Usually I will watch the times that felt like it took a lot longer to complete some task than it should have. While watching them, I’ll pay attention to specifically where the time went and figure out what I could have done better. When I first did this, I was really surprised at where all of my time was going.
For example, previously when writing code, I would write all my code for a new feature up front and then test all of the code collectively. When testing code this way, I would have to isolate which function the bug was in and then debug that individual function. After watching a recording of myself writing code, I realized I was spending about a quarter of the total time implementing the feature tracking down which functions the bugs were in! This was completely non-obvious to me and I wouldn't have found it out without recording myself. Now that I'm aware I was spending so much time isolating which function a bug was in, I test each function as I write it to make sure it works. This allows me to write code a lot faster as it dramatically reduces the amount of time it takes to debug my code.
In the past, I've spent time figuring out where time is going when I code and basically saw the same thing as in Overwatch, except instead of constantly making game-losing mistakes, I was constantly doing pointlessly time-losing things. Just getting rid of some of those bad habits has probably been at least a 2x productivity increase for me, pretty easy to measure since fixing these is basically just clawing back wasted time. For example, I noticed how I'd get distracted for N minutes if I read something on the internet when I needed to wait for two minutes, so I made sure to keep a queue of useful work to fill dead time (and if I was working on something very latency sensitive where I didn't want to task switch, I'd do nothing until I was done waiting).
One thing to note here is that it's important to actually track what you're doing and not just guess at what you're doing. When I've recorded what people do and compared it to what they think they're doing, the two are often quite different. It would generally be considered absurd to operate a complex software system without metrics or tracing, but it's normal to operate yourself without metrics or tracing, even though you're much more complex and harder to understand than the software you work on.
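To make the "metrics for yourself" idea concrete, here's a hypothetical minimal sketch: log a timestamp whenever you switch tasks, then summarize where the time actually went. The task names and log format are invented for illustration; real screen recording, as Malis describes, captures far more.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def summarize(log):
    """log: time-ordered list of (timestamp, task) task-switch events.
    Returns total time per task, attributing each interval to the
    task that was active when the interval started."""
    totals = defaultdict(timedelta)
    for (start, task), (end, _) in zip(log, log[1:]):
        totals[task] += end - start
    return dict(totals)

t = datetime(2020, 1, 6, 9, 0)
log = [
    (t,                         "write feature"),
    (t + timedelta(minutes=25), "wait for build"),
    (t + timedelta(minutes=27), "read the internet"),  # a 2-minute wait...
    (t + timedelta(minutes=55), "write feature"),      # ...became 28 minutes
    (t + timedelta(minutes=90), "stop logging"),       # sentinel: end of log
]
for task, total in sorted(summarize(log).items()):
    print(task, total)
```

Even this crude log surfaces the pattern from the paragraph above: a two-minute build wait silently turning into a half-hour distraction.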
Jonathan Tang has noted that choosing the right problem dominates execution speed. I don't disagree with that, but doubling execution speed is still a decent win that's independent of selecting the right problem to work on. I don't think choosing the right problem can be effectively discussed in the abstract, and the context necessary to give examples would be much longer than the already-too-long Overwatch examples in this post, so maybe I'll write another post that's just about that.
Anyway, this is sort of an odd post for me to write since I think that culturally, we care a bit too much about productivity in the U.S., especially in places I've lived recently (NYC & SF). But at a personal level, higher productivity doing work or chores doesn't have to be converted into more work or chores, it can also be converted into more vacation time or more time doing whatever you value.
And for games like Overwatch, I don't think improving is a moral imperative; there's nothing wrong with having fun at 50%-ile or 10%-ile or any rank. But in every game I've played with a rating and/or league/tournament system, a lot of people get really upset and unhappy when they lose even when they haven't put much effort into improving. If that's the case, why not put a little bit of effort into improving and spend a little bit less time being upset?
Here are the ideas I've posted about that were the most widely ridiculed at the time of the post:

- it's possible to make much more money by understanding how tech compensation works
- monorepos can be a reasonable choice
- CPU bugs are a serious problem
- markets don't eliminate all discrimination
- modern computers can have higher latency than old computers
- there's a lack of empirical evidence for the benefits of advanced type systems
- it's better to avoid obscure terms of art when plain language will do
My posts on compensation have the dubious distinction of being the posts most frequently called out both for being so obvious that they're pointless as well as for being laughably wrong. I suspect they're also the posts that have had the largest aggregate impact on people -- I've had a double digit number of people tell me one of the compensation posts changed their life and they now make $x00,000/yr more than they used to because they know it's possible to get a much higher paying job and I doubt that I even hear from 10% of the people who make a big change as a result of learning that it's possible to make a lot more money.
When I wrote my first post on compensation, in 2015, I got ridiculed more for writing something obviously wrong than for writing something obvious, but in the last few years things have flipped around. I still get the occasional bit of ridicule for being wrong when some corner of Twitter or a web forum that's well outside the HN/reddit bubble runs across my post, but the ratio of "obviously wrong" to "obvious" has probably gone from 20:1 to 1:5.
Opinions on monorepos have also seen a similar change since 2015. Outside of some folks at big companies, monorepos used to be considered obviously stupid among people who keep up with trends, but this has really changed. Not as much as opinions on compensation, but enough that I'm now a little surprised when I meet a hardline anti-monorepo-er.
Although it's taken longer for opinions to come around on CPU bugs, that's probably the post that now gets the least ridicule from the list above.
That markets don't eliminate all discrimination is the one where opinions have come around the least. Hardline "all markets are efficient" folks aren't really convinced by academic work like Becker's The Economics of Discrimination or summaries like the evidence laid out in the post.
The posts on computers having higher latency and on the lack of empirical evidence for the benefits of advanced type systems are the posts I've seen pointed to most often to defend a ridiculable opinion. I didn't know what I'd find when I started doing the work for either post, and both happened to turn up evidence that's the opposite of the most common loud claims (that there's very good evidence advanced type systems improve safety in practice, and that computers are of course faster in every way and people who think they're slower are just indulging in nostalgia). I don't know if this has changed many opinions. However, I haven't gotten much direct ridicule for either post even though both directly state a position I commonly see ridiculed online. I suspect that's partially because both posts are empirical, so there's not much to dispute (though the post on discrimination is also empirical and it still gets its share of ridicule).
The last idea in the list is more meta: no one directly tells me that I should use more obscure terminology. Instead, I get comments that I must not know much about X because I'm not using terms of art. Using terms of art is a common way to establish credibility or authority, but that's something I don't really believe in. Arguing from authority doesn't tell you anything; adding needless terminology just makes things more difficult for readers who aren't in the field and are reading because they're interested in the topic but don't want to actually get into the field.
This is a pretty fundamental disagreement that I have with a lot of people. Just for example, I recently got into a discussion with an authority who insisted that it wasn't possible for me to reasonably disagree with them (I suggested we agree to disagree) because they're an authority on the topic and I'm not. It happens that I worked on the formal verification of a system very similar to the system we were discussing, but I didn't mention that because I don't believe that my status as an authority on the topic matters. If someone has such a weak argument that they have to fall back on an infallible authority, that's usually a sign that they don't have a well-reasoned defense of their position. This goes double when they point to themselves as the infallible authority.
I have about 20 other posts on stupid sounding ideas queued up in my head, but I mostly try to avoid writing things that are controversial, so I don't know that I'll write many of those up. If I were to write one post a month (much more frequently than my recent rate) and limit myself to 10% posts on ridiculable ideas, it would take 16 years to write up all of the ridiculable ideas I currently have.
Thanks to Leah Hanson, Hillel Wayne, Robert Schuessler, Michael Malis, Kevin Burke, Jeremie Jost, Pierre-Yves Baccou, Veit Heller, Jeff Fowler, Malte Skarupe, David Turner, Akiva Leffert, Lifan Zeng, John Hergenroder, Wesley Aptekar-Cassels, Chris Lample, Julia Evans, Anja Boskovic, Vaibhav Sagar, Sean Talts, Valentin Hartmann, Sean Barrett, Kevin Shannon, Enzo Ferey, and an anonymous commenter for comments/corrections/discussion.
The choice of Overwatch is arbitrary among activities I'm familiar with where it's easy to get ranking information and directly observe what players are doing.
99% of my gaming background comes from 90s video games, but I'm not going to use those as examples because relatively few readers will be familiar with those games. I could also use "modern" board games like Puerto Rico, Dominion, Terra Mystica, ASL, etc., but the number of people who play in rated games is very low, which makes the argument less convincing (perhaps people who play in rated games are much worse than people who don't -- unlikely, but difficult to justify without comparing gameplay between rated and unrated games, which is pretty deep into the weeds for this post).
There are numerous activities that would be better to use than Overwatch, but I'm not familiar enough with them to use them as examples. For example, on reading a draft of this post, Kevin Burke noted that he's observed the same thing while coaching youth basketball, and multiple readers noted that they've observed the same thing in chess, but I'm not familiar enough with youth basketball or chess to confidently say much about either activity, even though they'd be better examples because it's likely that more readers are familiar with basketball or chess than with Overwatch.
When I first started playing Overwatch (which is when I did that experiment), I ended up getting rated slightly above 50%-ile (for Overwatch players, that was in Plat -- this post is going to use percentiles and not ranks to avoid making non-Overwatch players have to learn what the ranks mean). It's generally believed and probably true that people who play the main ranked game mode in Overwatch are, on average, better than people who only play unranked modes, so it's likely that my actual percentile was somewhat higher than 50%-ile and that all "true" percentiles listed in this post are higher than the nominal percentiles.
At slightly above 50%-ile, essentially every aspect of gameplay, from aim and dodging to knowledge of the game modes and teamwork, ranges from bad to atrocious.
Having just one aspect of your gameplay be merely bad instead of atrocious is enough to get to 50%-ile. For me, that was my teamwork; for others, it's other parts of their gameplay. The reason I'd say that my teamwork was bad, and not good or even mediocre, is that I basically didn't know how to play the game and didn't know any of the characters' strengths, weaknesses, and abilities, so I couldn't possibly coordinate effectively with my team. I also didn't know how the game modes actually worked (e.g., under what circumstances the game will end in a tie vs. going into another round), so I was basically wandering around randomly with a preference for staying near the largest group of teammates I could find. That's above average.
You could say that someone is pretty good at the game since they're above average. But in a non-relative sense, being slightly above average is quite bad -- it's hard to argue that someone who doesn't notice their entire team being killed from behind while two teammates are yelling "[enemy] behind us!" over voice comms isn't bad.
After playing a bit more, I ended up with what looks like a "true" rank of about 90%-ile when I'm using a character I know how to use. Due to volatility in ranking as well as matchmaking, I played in games as high as 98%-ile. My aim and dodging were still atrocious. Relative to my rank, my aim was actually worse than when I was playing in 50%-ile games since my opponents were much better and I was only a little bit better. In 90%-ile, two copies of myself would probably lose fighting against most people 2v1 in the open. I would also usually lose a fight if the opponent was in the open and I was behind cover such that only 10% of my character was exposed, so my aim was arguably more than 10x worse than median at my rank.
My "trick" for getting to 90%-ile despite being a 1/10th aimer was learning how the game worked and playing in a way that maximized the probability of winning (to the best of my ability), as opposed to playing the game like it's an FFA game where your goal is to get kills as quickly as possible. It takes a bit more context to describe what this means in 90%-ile, so I'll only provide a couple examples, but these are representative of mistakes the vast majority of 90%-ile players are making all of the time (with the exception of a few players who have grossly defective aim, like myself, who make up for their poor aim by playing better than average for the rank in other ways).
Within the game, the goal is to win. There are different game modes, but for the mainline ranked game, they all involve some kind of objective that you have to be on or near. It's very common to get into a situation where the round timer is ticking down to zero and your team is guaranteed to lose if no one on your team touches the objective, but your team may win if someone can touch the objective and not die instantly (which will cause the game to go into overtime until shortly after both teams stop touching the objective). A concrete example of this that happens somewhat regularly: the enemy team has four players on the objective while your team has two players near the objective, one tank and one support/healer. The other four players on your team have died and are coming back from spawn. They're close enough that if you can touch the objective and not instantly die, they'll arrive and probably take the objective for the win, but they won't get there in time if you die immediately after taking the objective, in which case you'll lose.
If you're playing the support/healer at 90%-ile to 95%-ile, this game will almost always end as follows: the tank will move towards the objective, get shot, decide they don't want to take damage, and then back off from the objective. As a support, you have a small health pool and will die instantly if you touch the objective because the other team will shoot you. Since your team is guaranteed to lose if you don't move up to the objective, you're forced to do so to have any chance of winning. After you're killed, the tank will either move onto the objective and die or walk towards the objective but not get there before time runs out. Either way, you'll probably lose.
If the tank did their job and moved onto the objective before you died, you could heal the tank for long enough that the rest of your team would arrive and you'd probably win. The enemy team, if they were coordinated, could walk around or through the tank to kill you, but they won't do that -- anyone who knows that doing so would win the game and can aim well enough to follow through can't help but end up in a higher rank. And the hypothetical tank on your team who knows that it's their job to absorb damage for their support in that situation, and not vice versa, won't stay at 95%-ile very long because they'll win too many games and move up to a higher rank.
Another basic situation that the vast majority of 90%-ile to 95%-ile players will get wrong is when you're on offense, waiting for your team to respawn so you can attack as a group. Even at 90%-ile, maybe 1/4 to 1/3 of players won't do this and will just run directly at the enemy team, but enough players realize that 1v6 isn't a good idea that you'll often see 5v6 or 6v6 fights instead of the constant 1v6 and 2v6 fights you see at 50%-ile. Anyway, while waiting for the team to respawn in order to get a 5v6, it's very likely that one player who realizes they shouldn't run into the middle of the enemy team 1v6 will decide they should try to hit the enemy team with long-ranged attacks 1v6. People will do this instead of hiding in safety behind a wall even when the enemy has multiple snipers with instant-kill long range attacks. People will even do this against multiple snipers when they're playing a character that isn't a sniper and needs to hit the enemy 2-3 times to get a kill, making it overwhelmingly likely that they won't get a kill while taking a significant risk of dying themselves. For Overwatch players, people will also do this when they have full ult charge and the other team doesn't, turning a situation that should be to your advantage (your team has ults ready and the other team has used ults) into a neutral situation (both teams have ults) at best, and instantly losing the fight at worst.
If you ever read an Overwatch forum, whether that's one of the reddit forums or the official Blizzard forums, a common complaint is "why are my teammates so bad? I'm at [90%-ile to 95%-ile rank], but all my teammates are doing obviously stupid game-losing things all the time, like [an example above]". The answer is, of course, that the person asking the question is also doing obviously stupid game-losing things all the time, because anyone who doesn't constantly make major blunders wins too much to stay at 95%-ile. This also applies to me.
People will argue that players at this rank should be good because they're better than 95% of other players, which makes them relatively good. But in absolute terms, it's hard to argue that someone is good when they don't realize that stepping on the objective probably wins the game, while never touching it is a sure loss. One of the most basic things about Overwatch is that it's an objective-based game, but the majority of players at 90%-ile to 95%-ile don't play that way.
For anyone who isn't well into the 99%-ile, reviewing recorded games will reveal game-losing mistakes all the time. For myself, usually ranked 90%-ile or so, watching a recorded game will reveal tens of game-losing mistakes in a close game (which is maybe 30% of losses; the other 70% are blowouts where there isn't a single simple mistake that decides the game).
It's generally not too hard to fix these, since the mistakes are like the example above: simple enough that, once you see that you're making the mistake, the fix is straightforward.
There are probably some people who just want to be angry at their teammates. Due to how infrequently you get matched with the same players, it's hard to see this in the main rated game mode, but I think you can sometimes see this when Overwatch runs mini-rated modes.
Mini-rated modes have a much smaller playerbase than the main rated mode, which has two notable side effects: players with a much wider variety of skill levels will be thrown into the same game and you'll see the same players over and over again if you play multiple games.
Since you end up matched with the same players repeatedly, you'll see players make the same mistakes and cause themselves to lose in the same way, and then have the same tantrum and blame their teammates in the same way, game after game.
You'll also see tantrums and teammate blaming in the normal rated game mode, but when you see it, you generally can't tell if the person who's having a tantrum is just having a bad day or if it's some other one-off occurrence since, unless you're ranked very high or very low (where there's a smaller pool of closely rated players), you don't run into the same players all that frequently. But when you see a set of players in 15-20 games over the course of a few weeks and you see them lose the game for the same reason a double digit number of times followed by the exact same tantrum, you might start to suspect that some fraction of those people really want to be angry and that the main thing they're getting out of playing the game is a source of anger. You might also wonder about this from how some people use social media, but that's a topic for another post.
For example, there are some players who have a disability that prevents them from improving, but at the levels we're talking about, 99%-ile or below, that will be relatively rare (certainly well under 50%, and I think it's not unreasonable to guess that it's well under 10% of people who choose to play the game). IIRC, there's at least one player in the top 500 who's deaf (this is severely disadvantageous since sound cues give a lot of fine-grained positional information that cannot be obtained in any other way), at least one legally blind player who's 99%-ile, and multiple players with physical impairments that prevent them from having fine-grained control of a mouse, i.e., who are basically incapable of aiming, who are 99%-ile.
There are also other kinds of reasons people might not improve. For example, Kevin Burke has noted that when he coaches youth basketball, some children don't want to do drills that they think make them look foolish (e.g., avoiding learning to dribble with their off hand even during drills where everyone is dribbling poorly because they're using their off hand). When I spent a lot of time in a climbing gym with a world class coach who would regularly send a bunch of kids to nationals and some to worlds, I'd observe the same thing in his classes -- kids, even ones who are nationally or internationally competitive, would sometimes avoid doing things because they were afraid it would make them look foolish to their peers. The coach's solution in those cases was to deliberately make the kid look extremely foolish and tell them that it's better to look stupid now than at nationals.