What the Dunning-Kruger effect is and isn’t (2010) (talyarkoni.org)
99 points by nabla9 on Dec 27, 2020 | 68 comments


What was really enlightening for me was that DKE is just plotting two lines, perceived ability and actual ability.

Now if people were perfectly accurate those lines would be drawn on top of each other.

But people either underestimate how far they sit from the average, or they correctly judge a true ability that the test doesn't fully capture. Either way, let's make the perceived ability line flatter than the actual ability line.

People also tend to overestimate their own ability, so let's move the perceived ability line up. That pushes the intersection of the two lines to the right, so we find that the best performers are the most accurate self-assessors, which is the classic DKE pattern.

But the harder we make a task, the more we push that perceived ability line down. Make the task sufficiently hard and the worst performers become the most accurate judges of their own ability, the opposite of DKE, because the intersection gets pushed to the left.

So really DKE is an artifact.
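To make that concrete, here is a minimal sketch (my own illustration, not from the article): actual ability on one axis, and a perceived-ability line that is both flatter and shifted up. The slope of 0.4, the upward shift and the noise level are made-up numbers chosen only to show the shape; binning by actual ability then reproduces the DKE-style gap.

    import numpy as np

    rng = np.random.default_rng(0)
    actual = rng.uniform(0, 100, 10_000)  # actual ability as a percentile-like score

    # Perceived ability: compressed toward the middle (flatter line) and
    # shifted upward (general overestimation), plus some noise.
    perceived = 55 + 0.4 * (actual - 50) + rng.normal(0, 10, actual.size)
    perceived = np.clip(perceived, 0, 100)

    # Mean actual vs. perceived score per actual-ability quartile:
    # the bottom quartile overestimates heavily, the top underestimates a bit.
    for q in range(4):
        mask = (actual >= 25 * q) & (actual < 25 * (q + 1))
        print(f"Q{q + 1}: actual {actual[mask].mean():5.1f}  perceived {perceived[mask].mean():5.1f}")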


Imagine a graph where the x-axis is competence and the y-axis is self-perceived competence.

Quadrant II of that graph is Dunning-Kruger. Quadrant IV is imposter syndrome. Quadrants I and III aren’t talked about but I suspect most people, most of the time, end up in these quadrants if you ask them about their ability to repair helicopter engines or perform open-heart surgery.


"Almost no one can perform open heart surgery well" and "I am better then the average person at open heart surgery" are fully consistent personal opinions one could have.


Why so?

> But the harder we make a task the more we push that perceived ability line down.

That's only true if people misperceive how hard the task is across the population, or if the measurement is trivialized because the distribution of actual performance is overwhelmed by noise.


The third section of the article, "3. The instrumental role of task difficulty," goes into how "people misperceive how hard the task is across the population" based on task difficulty, and quotes a good amount of research showing this is true.

It seems pretty intuitive why people would use task difficulty as a heuristic for relative performance.


The way I interpret DKE is that low level people don’t (yet) know what they don’t know.

One thing that I think often gets skipped over in discussions on Imposter Syndrome is that imposters DO exist (that’s kind of why we use interviews when hiring people).


>> The way I interpret DKE is that low level people don’t (yet) know what they don’t know.

Exactly, it's like people who think the company CEO is a useless idiot because they don't see him writing React code or whatever their limited definition of "adding value" is.

Another example I see on HN - whenever a hiring thread comes up, people chime in how stupid whiteboard interviews are, and surely the poster would be a super-star at a FAANG if only they weren't gate-kept by these dumb interviews. Isn't it more likely that these super-successful companies use these interviews because they end up hiring the people they want to hire, and it's you, the poster, lacking a clue as to what that looks like?

Basically the heuristic is, if something/someone is successful and it seems stupid to you, start by assuming it's you who likely has something to learn.


Glad you brought up the hiring threads on HN. Recently I've felt so many of these threads have devolved into "these stupid companies using whiteboard hiring are all a bunch of idiots!" that I usually stop commenting.

What's most illuminating about those kinds of comments is that they rarely even try to view things from the hiring company's perspective. Having been in that boat, you'll find:

1. The cost of a mishire is huge. And, even worse than hiring someone who is downright awful is hiring someone who is just plain mediocre, or slow. I mean, I've hired diligent, hard-working, otherwise smart people who could only get stuff out the door at about a third the rate of other devs. Having a performance discussion with those folks can be extremely difficult, because oftentimes there is little they can realistically do to go faster.

2. Experience can have little correlation with proficiency. I've interviewed people who were "senior engineers" and "architects" at major corporations who couldn't code FizzBuzz; indeed, they could barely write a syntactically correct function in their chosen language.

3. Many companies want to diversify their employee base by going outside just friends/referrals and well known college programs, but to "take a chance" on other folks means they need to get a strong signal during the interview process.

I'm not saying there aren't problems in hiring, but given how there is a giant economic incentive to make hiring as accurate and efficient as possible it's at least valuable to attempt to understand why a company might interview the way it does.


Hiring threads are a microcosm of the classic forum disease: people who actually know what they’re talking about are busy doing it for a living, while the malcontents and loudmouths have all day to complain on the internet. HN is one of the better forums when it comes to avoiding that outcome but still not perfect.


Number 1 wouldn't have to be a problem if we changed the social contract. The literal contract (nearly always) states your employment is at will. You can be let go at any time, for any reason or no reason at all. Employers should be more willing to (1) take a risk on a hire, and (2) let them go if it doesn't work out.

Then, hiring doesn't have to be this grandiose, talk to 20 people over 2 days in twenty 30-minute sessions, but actually the VP has the final call, sort of decision.


JMTQp8lwXL, LOL. So your proposal is instead of thoroughly interviewing, just give jobs out randomly and then fire people?

That would work, aside from the fact that it's horrible for everyone:

- The employee: they presumably quit another job to take this one and now they are out on their ass because the employer didn't bother to test for proper fit.

- The team: they invest in training the new person, get to like them, only to have them fired. Meanwhile they are short a person as hiring effort has to be restarted so they lose a ton of time understaffed.

- The company: dealing with all this terrible churn, stopping and starting the hiring effort, is there still severance in this world? How does unemployment work?

This is just the craziest proposal I've ever heard. It's like saying "dating is too onerous, people should just marry randos and not hesitate to get divorced if it turns out the other person brushes their teeth in an annoying way."


I certainly don't think this is the craziest proposal I've ever heard. Netflix, for example, is quite transparent that they have a "hire and fire" culture, and I for one wish that many tech companies would be more willing to have open, honest conversations with their employees and ensure everyone is up to whatever bar they set. I've seen it become incredibly demoralizing for the rest of the team when subpar performers don't get let go, and I've also seen rather disingenuous uses of "layoffs" as an excuse to finally cut people who should have been cut long ago.

Still, there are practicalities that can make "hire fast, fire fast" difficult in the real world. First, many countries outside the US have much more onerous requirements to fire someone. Even in the US, it's quite trivial for any employee to take legal action if they think they've been let go unfairly, so most HR departments will require many months of detailed documentation (e.g. email conversation, performance reviews, PIPs) before someone can be let go.


I by no means advocate for keeping bad employees around. Quite the opposite.

My point is that there's no reason to somehow induce your company to have more bad employees by watering down interviewing. Literally no upside.


I think you may be misrepresenting what I am saying. You can generally sense whether someone could be a good fit in 20 minutes of conversation. The ROI of an additional 500 minutes of conversation doesn't change the error bars. They could be great leetcoders, but not a good colleague. Hopefully the first 20 minutes gave you a sense of their soft skills, but even those can be misrepresented in small bursts that are unrepresentative of how the individual typically communicates on the job.

You're saying I am saying we should randomly give jobs. I am not. I am saying it should be, at max, a 1 hour phone call. Background checks can verify past employment and job titles. Just talk to them about work matters. Ask a fizzbuzz if you must. No additional data will meaningfully tell you much more anyways.


> You can generally sense if someone could be a possibly good fit in 20 minutes of conversation.

The last position I hired for had 700 applicants. Paper filtered down to 75. Interview screened down to 30. Detailed interviews with 5.

How would you propose this work? 30 people seemed like a possibly good fit after 20 minutes. Should we randomly hire one? Should we hire all 30 and have them fight it out, Mad Max Thunderdome style?

I think if the position has low startup time and is assembly line work or something we could hire all 30 and fire the poor performers. Outside of tech support, I’m not aware of any positions like this.

I’ve worked with managers who thought programmers were like this and would hire 10 or 20 at a time to see who stuck. It was very frustrating as an existing employee because they all had to be trained, reviewed, etc. After a year of arguing, I (and every other senior, over about two years) left that company.


I think I need to update my mental models around the tech job market here, and what the interview process is trying to achieve. To take this example further: of the 700 initial applicants, how many would be _adequate_ for the position? You had the opportunity to have quite detailed interviews from the pick of the bunch - is the goal to find the very best of the lot because you can afford to be picky; or there are genuinely only 3-4 in the 700 who could _adequately_ do the job?

My preexisting mental model is that from the 700 applicants, 30 could have filled the position adequately, with perhaps a ~5% gain across them. Each additional step in the interview process yields increasingly diminishing returns but is still worth it because ... you can choose to be picky in this market. The risks associated with a mishire are also presumably greatly reduced with each diminishingly selective step. Or is it that there genuinely exists only 2-3 people in that pool who are a good fit for the role, and you have little choice but to engage in kissing a whole load of frogs to find your prince?


I’m not sure how many could adequately do the job, mainly because the phone screen doesn’t give a good sense of this. Of the 30 I talked with, maybe 5-10 could potentially do the job adequately.

We only extended an offer to one person and maybe a second if the first declined. If both of those two declined then we wouldn’t have offered to the other 3 detail interviewees.

I’d certainly like a better way that uses less time. But it does seem that I need to kiss a lot of frogs.

Also, the goal isn’t to get an “adequate” person but to get a really good or great person. I think that a good person can do 5x more than an ok person, maybe even more. So for this position, I’d rather keep looking than just get a body.

Again, if I just needed assembly line workers or warm bodies that could bill in a consultant sweatshop, that’s a different story. But I don’t want to be in such a hiring position.


>I think that a good person can do 5x more than an ok person, maybe even more

This is really surprising to me, and I'm not sure how this can be the case. I would imagine that this would be true of a high level position at the top of their game, able to define their own schedules & goals. For a typical worker with clearly defined short & long term objectives defined by the organization, how does one become 5X more productive? Clearly my heuristics about the tech industry are way out of step with reality.

However, I could see something like this happening in my own field: academia. As a bioinformatics postdoc, the bread & butter work most of my colleagues do is routine, so they can hit their goals fairly predictably. My project, in comparison, is mostly ad-hoc, struggling to parse a novel dataset right on the edge of what is technically possible. My productivity is in the toilet, and I can imagine a different researcher in my position being 5-10x more productive. This is not how I imagined the tech industry working, though.


I used to do a lot of programming and the multiples are really high, especially since bad programmers can have negative productivity (kind of like Hammerstein-Equord’s stupid and industrious quadrant).

I remember a McKinsey study from the 80s or 90s that talked about 10x productivity, but I can’t find it so maybe I’m misremembering. I don’t think people who actually try to measure this succeed well. I run away from anyone trying to precisely measure programmer productivity because it usually means some pointy-haired boss will try to optimize on lines of code or something stupid.

So for me, it’s a bit of a hunch but a strong hunch. I’ve worked in shops of 100 devs where a single dev wrote the entire authentication stack that teams couldn’t handle. And I have lots more stories like this.

Nowadays I do “strategy” and I’d say the multiple is more than 10x in that there’s usually some magic mix to a good strategy that can’t be accomplished with giant committees and tons of hours. When it comes to creative tasks, I think the productivity leap from “ok” to “good” and from “good” to “super” is really high. Not every job is like this, but I think these are the most fun and so try to go towards them and away from commodity work.

I have limited experience in academia but have seen authors quickly crank out really useful papers where teams have been working for months. I suspect that’s more about just having good alignment of capability and need instead of some magic or intrinsic productivity power.

Personally my productivity is pretty sucky so I’m in a situation where I don’t meet my criteria but am lucky enough to get to work on cool stuff.


Hmm. The way I see benchwork scientists being 'productive' is an endless slog - your cultures grow on their own schedule and don't respect the boundaries of work-life balance. Of course, this isn't true for everybody, but it was one of the reasons pushing me away from benchwork towards bioinformatics. Our work also tends to be much more insular, gnawing away for years on our own little piece of the problem. I suppose I'm personally at a fairly "creative" point in my career - I have absolute free rein in my study, and my PI encourages it.


An updated perspective on how to think about "10x talent" in IT: https://sloanreview.mit.edu/article/its-time-to-reset-the-it...


> Should we randomly hire one?

If the additional interviewing time doesn't produce a real improvement in outcomes over random hiring, then yes, don't waste the interviewees’ time, or the time the firm is paying you for, doing it.


My problem is that I don’t know of a good way to measure that improvement. I think it’s hard to design a study and measure.

And I think there’s a deontological vs. utilitarian aspect as I think if applicants learned I was randomly picking 1 of 30 pretty goods it would discourage high quality applicants.

For me, I think the additional 100 hours interviewing helps and is worth it.

I’d be interested in hearing of hiring managers and orgs who just hire randomly from minimally qualified applicants. I hear from lots of applicants who claim they are minimally qualified that this is a successful strategy. But, naturally, they are a bit biased and not very useful for deciding how to hire people.


I also don't understand the assumption he/she makes that the additional interview time doesn't help. Of course it helps.


I'm sure the additional interview time helps - how can it not? I suppose the question is - is it _necessary_ to find a suitable candidate?


In this and your previous post, you greatly under-appreciate the cost and disruption of a bad hire. Not to mention the time lost bringing a new hire up-to-speed on the company's way of doing things. A bad hire can be very expensive and disruptive.

If a few more hours of interviews can avoid a bad hire, I think almost all experienced managers would opt for the extra interviews.


If your work culture allowed you to remove bad hires, then the company would not experience the 'cost and disruption'. The problem isn't the time invested getting employees up to speed: you'll do that with good or bad employees. The problem is, bad employees stay, and often for long periods of time. Have you experienced the bureaucratic nightmare of formally dismissing someone? It's something we all would like to avoid, but clearly today's filtering is ineffective if our organizations are filled with dead weight.


> If a few more hours of interviews can avoid a bad hire

Can it, though?

I'm sure it can increase the manager's subjective feeling of confidence, but does it verifiably, objectively produce better results?


I've uncovered craziness in candidates late in the process when they are no longer guarded about what they're saying. So, for myself, it has indeed produced objectively better results. I expect that most seasoned managers have had similar experiences.


But how do whiteboard interviews help with (1)? I see them as unconnected, and would prefer instead (for example) giving a sample assignment for 2 hours, on-site or off-site.


I hope this insight helps you and others because it's key:

What does an off-site sample assignment tell the company? That you can write code when given a spec. That's great, but that's insufficient for what they are hiring for.

What it doesn't tell them:

- Can you elicit clarity on a vaguely stated problem? These are conversations that top engineers have with their business counterparts and other technologists constantly.

- Can you iterate with another person on a solution? The whiteboarding exercise is a dialogue, and while contrived, problems are often solved in a joint manner like that.

- How do you do under pressure? Something is going wrong and it's affecting millions of users. Can you carry your weight when the team is fire fighting?

Obviously the whiteboard isn't perfectly correlated with the above, but it's as good an indicator as I can think of. Saying "just let me write something off-line" means people are oblivious to these other key attributes. FAANG devs aren't earning 300K a year for just being able to write code on a spec.


> Isn't it more likely that these super-successful companies use these interviews because they end up hiring the people they want to hire...?

I’m not sure it’s that these companies only hire the people they want to hire. It’s probably that highly successful companies know their interview methods don’t work 100% of the time, but they’re willing to tolerate the false negatives of not hiring a few good people if it means all those they do hire meet some minimum standards. And their current system is the best they know how to do.

We arrogantly post on Hacker News: “Oh look, this interview method sucks! It will miss out on this great candidate!”. Ok then, make your own trillion dollar company with a better interview method that filters out the hordes of bullshit artists while simultaneously never missing out on the best people. Filtering out the bullshit artists is more important than hiring every single productive person. In fact, if you hire too many bullshit artists, all the good people will get fed up and leave anyway.


It's still a frustrating experience for candidates though. If you were to split the candidates into three buckets, with the first bucket being people who would never pass a FAANG interview (can't code their way out of a wet paper bag), people who can pass a FAANG interview but not reliably, and people who can reliably pass FAANG interviews (PhD in computer science or just leetcode grinder), most of the people in the first two buckets would complain about the interview process.

As someone in the second bucket, the process feels rather random (eg. passed the Facebook L6 interview but failed the Google L4 one somehow, passed the onsite at half a dozen other companies), which is mostly because it is. Maybe some interviewer really wanted to see a topological sort algorithm implemented in 20 minutes and there's not enough time in the interview to derive it from first principles, or maybe an interviewer felt grumpy that day, and then that's it, better luck next time. Combined with the policy of not telling candidates what to improve on, it's a pretty frustrating experience.

Yes, there are reasons for the company to behave that way, but it's still unpleasant to be on the receiving end of it.


Joel Spolsky has a very good (now old) blog post about this: https://www.joelonsoftware.com/2006/10/25/the-guerrilla-guid...

Basically the argument is that it is much better to miss out on a good person than hire a mediocre/bad person.


Although I agree in general, the example you've chosen might not be the best one. This might be one of (numerous) areas in business where success is not measured correctly and the methods used are never really validated.

Do human resources departments actually evaluate their hiring criteria and the tests they use in experimental designs, e.g. by hiring two sufficiently large groups A and B based on different interview methods and checking a year later how the groups performed relative to each other?

Bear in mind that there are plenty of people who believe that they're good at doing something just because they have been doing it for a long time. Without an independent method for confirming such claims they are practically useless.


There are two things at play here.

First, candidates who complain about the whiteboard interviews uniformly complain about it "not being a good representation of how I can write code" which betrays a lack of understanding of the other skills besides writing code that these firms care about. People who understand more can connect the interview type to the other relevant attributes, so it's already clear who's more right.

Second, yes, companies do a shitload of triangulation on recruiting performance including looking many years out.


In theory, I like your idea, but in practice, I think corporations large enough to run experiments like that are not able to objectively evaluate anything they do, at least not officially. It's too political. I've been in dozens of interviews as an interviewer and worked with at least a dozen people after seeing how they performed in interviews, so that's what I go on.


The big software firm I worked at (not FAANG but we competed with them for talent) absolutely tweaked its interview processes based on long run outcomes.

FAANGS evolve their process over time too.

One clear example - at FAANGS even if the manager likes you, it doesn't matter. You have to be evaluated by a committee of people who're indifferent to your manager, aren't under the gun to fill a slot and are less likely to be swayed by a one-time positive conversation.

Do you think this process came out of nowhere? That's just one example of them settling on something that worked, I am sure having previously tried other less structured approaches and tracking the results.


Your last sentence made me think about Hanlon's Razor[0]

[0] https://en.m.wikipedia.org/wiki/Hanlon%27s_razor


> Isn't it more likely that these super-successful companies use these interviews because they end up hiring the people they want to hire, and it's you, the poster, lacking a clue as to what that looks like?

No it's more likely that the people staffing FAANG companies absolutely suck at interviewing and vetting.


> The way I interpret DKE is that low level people don’t (yet) know what they don’t know.

That's obviously a part of it, but if that was all of it, you would expect accuracy of assessment of relative ability to consistently get better with relative ability.

But, in fact, the result is basically “everyone sees themselves as closer to the 70th percentile than they actually are”.

Everyone focusses on the low end of DKE to use it as a low-brow dismissal, but the fact that people at the top of the distribution tend to underestimate their relative ability is also interesting.

> imposters DO exist (that’s kind of why we use interviews when hiring people).

Yes, the presence of imposters in hiring and hiring-policy-setting positions is why we use assessment tools that don't work very well to evaluate candidates.


Yes, the Wikipedia quote seems incomplete: "The miscalibration of the incompetent stems from an error about the self, [...]"

The source of miscalibration is from the estimation of the subject area and the assessment of how much of that has been learned. I suspect that the error in the estimation of the subject space is the much larger source of error.


The whole article was about how that's probably not the case and it's better explained by other factors.


One application I don't see discussed much is how this is useful for personal learning. Knowing the effect exists is useful when learning about a totally new topic: you know you need to hit that first peak, so to speak, to figure out (roughly) the scope of what there is to know. This helps avoid actually having that false confidence early on.

One of the most memorable examples of this for me was early on doing calculus in high school. There was a time when I was like "ok, yeah, I understand all the maths stuff now," then I hit that first peak and suddenly realized "oh wow, I know only a tiny, tiny fraction of what there is to know!" When I later learned about Dunning-Kruger it resonated because it described what I had experienced years earlier and had been thinking about since.


For me the big eye-opener was learning to play traditional Irish dance music on whistle. About two years into the process, I thought I was great. I could read sheet music, I could play fast, I could play rolls, I knew a couple hundred tunes! Clearly I had to just keep it up at the same rate a few more years and I'd be a master.

And then, about two years after that, it gradually sunk in that I'd been missing everything about what the music was really about. I hadn't been able to pick out what made good playing good, so I couldn't tell that my playing wasn't. I had been focusing on a bunch of superficial stuff and missing the heart of the thing.

And now 20+ years on, I am vastly better at it than I was when I thought I was great. But I know there are loads of people much better. And I'm constantly worried there's some other epiphany waiting to happen when I will realize I'm still missing something absolutely key...



The thing I find sad is that there are people who realized that they are a mismatch for their position, not always because it is too high but sometimes because it is on the wrong track. Making the change is usually very difficult, as companies do not see "demotion by the demoted" as something normal.

A case I witnessed was a VP of Engineering who realized (after many years at that position, performing well) that he is more an engineer than a manager. His results were good but he did not like the work he was doing because his true talents were under-utilized.

He wanted to move to a position where he would have a more "technical" role, on the official "technical ladder" but it was very difficult for him to make his point. He liked the company very much though (in Europe).

I did not know what to really tell him at the time, I wish I had some good arguments for him to help his case.

One of the key things is that he did not want to change the company - he loved it, he loved what they did; he just took the wrong turn at some point.


One possible explanation for why the DKE is biased towards incompetence is any bias that derives from favorably redefining a problem. Overachievers don’t need to redefine a problem set to more narrowly qualify their performance.

For example an expert JavaScript developer can quickly produce an MVP with an original code solution. A less competent (confident) developer might also be able to produce a similar MVP but only in certain contexts, such as requiring a particular framework. The less competent developer may refer to themselves as an expert developer in React, or whatever other framework, but in doing so redefines the problem to fit a more narrow context not immediately aligned to the product.


What do you call it when you mistake "I know OF X" for "I understand X"? I used to do it all the time; I would think that because I had heard of something, I somehow knew how it works or how to use it. For me, it's kind of like going "Oh, I've seen that meme before!" and somehow going meme++ in your head, so of course I am 'more familiar with it', except I was doing that for advanced topics I have no demonstrated practice in. I am much better now at realizing my incompetence, but I wish it could be studied in more detail, instead of people just looking at someone who is demonstrating confident incompetence and going "Dunning-Kruger?" "Dunning-Kruger!".


I think one of the key takeaways was that people are biased or wishful to think the Dunning-Kruger effect is stronger than it really is, because people want methods to shut down anyone who brags or thinks very highly of themselves.


Also you cannot deduce from "person thinks they are skilled" to "person is not skilled".

But it is useful to think about this for our personal growth. There are many psychological traps and challenges that can slow down progress or even halt it.

Related:

https://daedtech.com/how-developers-stop-learning-rise-of-th...


Yeah, I really strongly agree with that point.

Every single time I see the Dunning-Kruger effect quoted it's used to dunk on some person perceived as incompetent (usually a highly-placed executive) as a way to say "No, look, all people in that position think they're geniuses but they're actually morons!"

But... well, even if there weren't a thousand different possible confounders (regression to the mean, or plain confusion about the level of competence of the other students taking the test), the DK effect isn't that strong.

People act like it's an uncanny valley of stupidity and hubris, when as presented it's more like perceived competence is only roughly correlated with actual competence, with minor bumps. And yet people keep invoking it as if it were the explanation behind all the incompetence in the world.


In setting out Dunning's & Kruger's own explanation of the data (incompetent people lack the skills they’d need in order to know they’re incompetent), the author asks, rhetorically, "If you don’t understand very many words in another language, how can you evaluate the size of your own vocabulary in relation to other people’s?" Well, as in my case, if you know many words in your native tongue that you cannot express in another language, surely that's a clue available to anyone in the same situation?


True, but it’s not perfect. Learners of English often don’t realize that they’re missing certain distinctions that their native language doesn’t make. Pig/pork is a common example. English has a lot of those.


Point taken. It has reminded me of a somewhat infamous phrasebook where, IIRC, the author simply applied a translation dictionary to phrases in his native tongue, without regard to these subtleties, or even of differences in grammar.



The most aggravating aspect of the Dunning-Kruger effect occurs in people who know about the Dunning-Kruger effect: often they believe there can be _no_ case where the bulk of a distribution is above average. The easy counterexample: the average number of legs on a human is about 1.98, so the bulk of the population is indeed above average.
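(Worked out with made-up but plausible numbers, purely to illustrate: if 1% of people are missing both legs and everyone else has exactly 2, the mean is 0.99 × 2 + 0.01 × 0 = 1.98, so the 99% of people with two legs are all above the average.)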


That's why I say "above median" (at most 50% can be above the median, by definition).


Those who claim they understand the Dunning-Kruger effect are typically less competent than those who don’t.


That's way too meta for me.


"Citations of Dunning-Kruger are usually examples of it."


These days I associate “Dunning-Kruger” with “humblebrag”. I have similar feelings about Imposter Syndrome. I wish both terms had never been popularized.


I personally think the DKE is more likely to be found in knowledge based tasks requiring some level of gatekeeping. Like programming for example...

It has been quite a while now (~9 years), but I had the chance to work at a pretty prominent tech company for about 8 months, back when it was roughly 13 people in engineering. It was my first job experience in "tech" proper, and I think I was most fortunate to have had the opportunity. Immediately, and without question, I knew for sure I was absolutely the most junior person there. It's not because anyone was unkind, quite the opposite; everyone else was so unbelievably experienced it was obvious. I had, up until that point, not encountered such levels of professional experience, or expertise in a field, outside of university.

It gave me perspective for my (own lack of) ability I likely could not have gotten otherwise. It was humbling in a wonderful way because it meant I had, and have, a great deal to learn. I was happy to be resolutely at the bottom.

While undeniably good for my person, and who I want to be in life, it has also come to cause me a lot of pain.

I have worked pretty tirelessly since that original job in the pursuit of learning and mastery of my craft. And if I am honest, to my chagrin, I've found I now underestimate my abilities. I only realized this because I generally give people the "benefit of the doubt" and it kept/keeps (still trying) biting me. It's very much insanity: doing the same thing again, knowing it won't work, expecting a different result... "'cuz someone else must be smarter than me." What I should learn to say instead of "smarter than me" is "less experienced and very confident."

Erik Dietrich wrote an article about Dunning-Kruger where he coined the terms "Advanced Beginners" and "Expert Beginners." [1] Like Dietrich, I tend to observe that these folks have the capacity to cause the most harm to people in the competent/proficient range (where I find myself). I believe they mean well, in most cases, but disregard anything not directly from an expert. The worst case is when there is no expert available to check them, and they unrelentingly default to themselves. A close second is when they are the gatekeeper to the expert, and are unwilling or unable to explain someone else's ideas.

It's easy to see why it happens though. When people are unable to scope problems/tasks correctly, the problem's resolution becomes rooted in "beliefs," and generally speaking, people view others' beliefs as equivalent. With equivalent choices, we tend to then make decisions based on our preferences.

... but I digress, I've ranted and rambled enough.

[1] - Link to the referenced article: https://daedtech.com/how-developers-stop-learning-rise-of-th...


I think this article needs to be tagged that it's from 2010.


Added. Thanks!


Correlation does not imply causation. Regression to the mean and other statistical artifacts haven't, AFAIK, been proven to explain any part of the Dunning-Kruger effect. This is, like the article, of course, speculation.


Yes they have. You get D-K exactly when you run the simulation of the data collection with ceiling/floors, and you have to for basic statistical reasons. There's nothing to explain. D-K does not exist, and OP is misleading in trying to 'correctly' explain research that was wrong to begin with (he gets only halfway there when he admits that regression to the mean forces much of D-K, but then handwaves furiously that it can't explain all of it, when the other measurement problems account for the asymmetry in how much the top/bottom groups regressed).
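For anyone who wants to see that argument in action, here's a minimal sketch of the regression-to-the-mean simulation (my own illustration, with made-up noise levels and a bounded 0-100 scale): self-estimates and test scores are both noisy readings of the same latent skill, yet binning by the test score alone produces the familiar Dunning-Kruger plot, with no psychology simulated at all.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    skill = rng.normal(50, 15, n)                          # latent ability
    test = np.clip(skill + rng.normal(0, 10, n), 0, 100)   # measured score, with floor/ceiling
    guess = np.clip(skill + rng.normal(0, 10, n), 0, 100)  # self-estimate, equally noisy

    # Convert both to percentile ranks, then bin by test-score quartile,
    # the way the Dunning-Kruger plots are drawn.
    test_pct = test.argsort().argsort() / n * 100
    guess_pct = guess.argsort().argsort() / n * 100

    for q in range(4):
        mask = (test_pct >= 25 * q) & (test_pct < 25 * (q + 1))
        print(f"test quartile {q + 1}: actual percentile {test_pct[mask].mean():5.1f}, "
              f"self-estimated percentile {guess_pct[mask].mean():5.1f}")

    # The bottom quartile "overestimates" and the top quartile "underestimates"
    # purely from regression to the mean in the noisy measurements.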


> when the other measurement problems account for the asymmetry

Such as?

And what about the 2nd and and 3rd quartiles?

And even if the phenomenon is caused by the mathematics of floor/ceiling effects (low-skill people bias their assessment upward because they know they can't have a negative percentile), it's still a real effect of people misestimating their percentile.

Baseball players at bat can be modelled by random numbers too. That doesn't mean variance in performance over time is purely random.



