This screed is disappointing to me. We tell people to go out and do their own research. We tell people to avoid just buying into groupthink. We want people to be informed citizens who do research, share their research with others, and are open to revealed facts rather than just buying into the most appealing emotion-laden rhetoric.
Some dude does a bunch of research, reasons as carefully as he can about the subject, and openly provides his research and reasoning to others. The very first note in his research is a disclaimer to say that he's a layperson with no medical or scientific credentials. It doesn't appear that he's trying to deceive anyone.
Why the piling on? Why the expenditure of so much effort to silence his intellectual inquiry?
> Why the piling on? Why the expenditure of so much effort to silence his intellectual inquiry?
Opposition is not always the same as silencing, especially in this case.
Strenuously arguing against the rhetorical tactics and intellectual honesty of the presentation of ideas is, in fact, part of the free exchange of ideas.
This reminds me a little of the discussion about the effectiveness of PPE/face masks: in which direction they are effective, and why the debate played out the way it did in some countries. They are effective if everyone wears them. It just so happened that there were not enough masks available in the countries that had the most heated discussions about this. You got yelled down for saying so. And then, about two months later, when masks were available, wearing them was suddenly mandatory.
I'm not trying to "silence" anyone. The author should be free to write whatever he wants. I just think it's irresponsible and I'd like to encourage people with any influence or audience to do better.
EDIT: lots of criticism of me for not mentioning that the author added a disclaimer later. It's a fair criticism; I've updated my article.
The fact that he changed that without taking into account any of the other flaws in his work suggests that he is not, actually, open to criticism of his work.
Here are some examples. He shows charts estimating per age range fatality rates and leaves out the most vulnerable group. He estimates an IFR and thinks that is comparable with a CFR. (They are not.) He is apparently unaware of the fact that IFRs have long been thought to be a bit under 1%. (For example https://www.imperial.ac.uk/media/imperial-college/medicine/s... in mid-March was the paper that convinced the UK to do a lockdown - it used an estimated IFR of 0.9%.) He does his own IFR calculation for the least vulnerable groups without looking at research showing the full IFR. He complains about hospitals being underloaded while refusing to acknowledge that hospitals would be overloaded without lockdowns. He fails to admit that it is extremely hard to limit transmissions within managed care facilities, and in an environment with lots of COVID-19 around you, personal distancing measures provide very little protection. (In other words we cannot simply "isolate the old people" and expect it to work.)
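For anyone unclear on why those two numbers are not comparable, here is the distinction, with invented numbers purely for illustration (not figures from his paper or from any study):

```latex
\mathrm{CFR} = \frac{\text{deaths}}{\text{confirmed cases}}
\qquad
\mathrm{IFR} = \frac{\text{deaths}}{\text{all infections, detected or not}}
```

With, say, 100 deaths, 1,000 confirmed cases, and 10,000 total infections, the CFR is 10% while the IFR is 1%. Since confirmed cases undercount infections, a CFR generally runs well above the corresponding IFR, so holding an IFR estimate up against published CFRs is apples to oranges.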
In other words he is ignoring every widely known fact that undermines his position. Which not coincidentally are the facts that lead everyone else to the opposite conclusion from his.
Given that he can't fail to have seen many points like these, his leaving them out of his analysis shows dishonesty on his part. And no, he is not attempting to correct this fault.
Unless I missed it in your article, you knew about this when you wrote it and didn't mention it even in passing. I only found out by spending 20-30 minutes reading the original and then the comments on it. I basically lucked into that information.
I don't agree that we should open up right now but your representation of this article lost credibility when I found that one thing. I'd suggest you do as the author did and add your own disclaimer reference.
He is a popular public figure who gets sent crank work claiming to have resolved P vs. NP several times a week. His context for saying that is a little different than here, where the author is critiquing someone for creating what is essentially a short survey of several papers alongside his commentary. The original author isn't claiming to have proved anything, and is even asking for feedback and public contribution to his survey. It hardly struck me as crank-like.
That said, Tao is a good example of why gate-keeping is so dumb. If Tao wrote an article surveying and commenting on the current state of COVID-19 epidemiological research as someone with no experience in the field, I'd trust it. I'd actually probably be more likely to trust it than any random credentialed epidemiologist doing the same work.
But doesn't the point still stand that the reason Tao refuses random papers is bandwidth/capacity? The public faces the same problem, which is why credentials are generally more useful than the lack of them. One of the reasons this is even coming up is that readers felt there was an attempt by the author to obscure this issue of credibility.
And randomly selecting an expert from a field seems a bit off when there are institutions and professionals who have developed a _reputation_?
> We can’t take a close look at all of that chatter before deciding.
Yes you can. For example, if there's a lot of chatter in favor of reopening, you could find a thorough summary of arguments in favor of reopening - much like the article you're complaining about! - and do your best to evaluate it on merits.
> I don’t know if he’s evaluating the data correctly, or if he’s ignoring other relevant data that doesn’t support his conclusion.
Then you'll have the same problem with writings by experts. They have written plenty of articles with basic mistakes, see for example the Santa Clara serology study [1] and the takedown by Gelman [2]. Knowing a bit of math, I rechecked the calculations and figured out that Gelman is correct, even though he isn't an epidemiologist. If you use "context" and "motivations" and "heuristics" as excuses to avoid evaluating the argument on merits, maybe you should rethink what you're doing.
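To give a sense of what that recheck involves: the crux of Gelman's critique was that the study's headline prevalence is extremely sensitive to the assumed test specificity. Here is a minimal sketch of that calculation, with numbers that are only roughly in the ballpark of the study, not its exact figures:

```python
# Rogan-Gladen correction: back out true prevalence from a raw positive
# rate, given test sensitivity and specificity. All numbers below are
# illustrative approximations, not the Santa Clara study's exact figures.

def corrected_prevalence(raw_positive_rate, sensitivity, specificity):
    # raw = prev * sens + (1 - prev) * (1 - spec), solved for prev
    return (raw_positive_rate + specificity - 1) / (sensitivity + specificity - 1)

raw = 0.015   # ~1.5% of samples tested positive
sens = 0.80   # assumed sensitivity
for spec in (0.995, 0.985):
    prev = max(corrected_prevalence(raw, sens, spec), 0.0)
    print(f"specificity={spec:.3f} -> implied prevalence={prev:.4f}")
# specificity=0.995 -> implied prevalence=0.0126
# specificity=0.985 -> implied prevalence=0.0000
```

When the test's false-positive rate is close to the raw positive rate, nearly all of the positives could be false positives; that one percentage point of specificity is the difference between the headline result and nothing.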
> [...] excuses to avoid evaluating the argument on merits, maybe you should rethink what you're doing.
I think his argument still holds: experts having made mistakes doesn't change the fact that you are unlikely to be able to evaluate on the merits in the first place. Unless you are an expert in the field, you're highly unlikely to know enough about the other research out there that is valid, applicable, and could contradict the findings; you're not capable of a proper evaluation.
> The author even wrote the whole thing using “we” instead of “I”, which subtly gives the impression that a group of people collaborated to write this
Note, this is standard in the academic world, even if there is a single author. But it is a further way in which a phoney might mislead readers into thinking it is a credible academic paper.
> Note, this is standard in the academic world, even if there is a single author.
This is true. It's awful writing. And there are many other indications of terrible writing in academic papers as well.
- The Royal We. We are not amused by the use of the Royal We in academic writing. Like much of the bad writing in academia, this was probably borrowed from a bureaucratic style intended to diffuse and deflect responsibility. The right way to write a paper as a single author is to be honest with the reader. You are a single person: use "I".
- Official style. Reams of tirades have been written about passive voice, avoidance of responsibility ("experiments were done" instead of "I did the following experiments"), and longwinded gobbledygook.
- Weird use of tense. Academic authors jump between past, past perfect, and present tense with no justification, sometimes in the same sentence, particularly when moving from experiments to results to discussion.
- Backwards construction. This is particularly true with formalists. It's traditional in mathematics to start with fundamentals and slowly build towards your conclusion. But these same authors, when writing empirical or position papers, don't realize that this is awful construction in actual text.
- Horrible titles with little explanatory value. "On Foo" means "I wish to intimidate you with my knowledge on Foo". Words like "on", "towards", "an understanding", etc., should be banned entirely.
Academic authors don't know how to tell their story, how to make strong and concise arguments, or how to be convincing. Worse still, many think this is "how you do it". Why? Because their advisors, who were also awful writers, did it that way. Few of these people have ever been trained to be good writers. It's a self-sustaining cycle of mediocrity.
The point from this post that really resonated with me was that the author of the original paper, even while claiming that arguments should be based on merit and not on expertise or credentials, went out of their way to project that they had that expertise and those credentials. It's disingenuous.
If you'd like to submit a PR to the original paper you can totally do that and it'll be open for everyone to see. I may not agree with the author but I don't think attacking him is fair (on these points). Based on reading his comments he seems open to discourse and suggestions on his paper.
So he didn't think he needed a disclaimer, had a discussion around it, then added a disclaimer. That's exactly what I want from another human being: someone who is open-minded enough to have a discussion, opinionated enough to back up what they think, but in the end pliable enough to make an important change when they see it.
I might suggest you edit your post to include a nod to his disclaimer, as you've shown now that you know about it but don't mention it in your article.
I didn't include mention of the disclaimer because it wasn't in place when I saw the article in question, and I didn't link to the article because I don't think it deserves any more visibility.
Also, just looking at the comments on the original article, I don't think the author is open-minded or pliable, but that's just my .02.
Is it possible that it still counts as "putting on airs" even when done by academics?
I had a long stint in academics as both a student and as research staff, and I noticed many instances of unchallenged self-aggrandizement within that community. I could easily believe this to be another instance of that, even if institutionalized.
Edit: FWIW I haven't looked into the history of this particular topic. There may be a respectable explanation and I'm just ignorant.
In general this is done out of an appreciation that no work exists in a vacuum. "We" takes away from the individual and attributes the work to a group of people, even if they are not coauthors. I actually think this is one of the better conventions in academia.
Well, it is also worth mentioning that the "we" can be used to give a paper a conversational tone while still remaining formal. In that case the "we" refers to the other domain experts the author assumes are reading the paper.
There are some legitimate complaints here, but the point about the paper being well-written seems weird. Would the author prefer the discussion of such an important topic be poorly organized and perfunctory? Also, using "we" is standard scientific practice, not some bluff.
I think they bring that up as one of the many ways the original author might disguise their amateurism. It's definitely not a bad thing to write "well", but faking an authoritative voice is a common problem.
The use of "we" is not "faking an authoritative voice," it's just a standard feature of a certain kind of formal writing. I'd be happy to pillory the paper for being bad, but let's not fault the author for at least conforming to standard conventions and making it easy to read and dissect.
Waggoner's article is actually more bizarre to me on the second read. The claim does not appear to be that the paper is bad, just that interested, non-credentialed amateurs don't get to have (public) opinions, no matter how well-reasoned. While I'm highly skeptical of armchair epidemiology a priori, there are plenty of criticisms of the (un)linked article that are more interesting and educational than pointing out that it's written by someone without a PhD in epidemiology.
By problem, you mean a requirement for proper form in an academic or professional paper. This was drilled into me time and time again in college: you 'have' to write in such a manner.
> The tone was authoritative and confident, the author didn’t prominently disclose their lack of expertise
If the author had said "I don't know what I'm talking about, but here's what I reckon" people would have spent a lot less time reading and debunking it.
The point of TFA is that reading and debunking is a precious resource, therefore it is disingenuous to present your work in a way that encourages more reading and debunking than it actually deserves.
It can be well written and organized without coming off as authoritative and putting on the airs of academic credentials. At the very least, they could say up front that they are not an economist and have done the best they can in interpreting the data, while acknowledging that there might be technical errors or misunderstandings that come from not having worked in the field.
With one click on "About" you can find out the author is "a software / site reliability engineer based out of Santa Barbara, California" and does not list any academic credentials. This seems extremely transparent to me.
Asking readers to click through to other pages is not transparent. Transparent is at the top of the document, before you read it, knowing that most people will not do the extra click-through and will instead just read the summary and move on with their day. It's hard enough to get people to click through pages on a website when you want them to.
> Also, using "we" is standard scientific practice, not some bluff.
No. The standard practice is to use passive forms.
EDIT: I've been informed by the people responding that the passive form is now considered archaic, so apparently using first-person pronouns is now acceptable. This must have changed since I did my research.
> Nature journals prefer authors to write in the active voice ("we performed the experiment...") as experience has shown that readers find concepts and results to be conveyed more clearly if written directly.
I took a look at a few random recent papers at arxiv.org, in physics, biology, math, and economics. They all used "we".
Wikipedia says it is discouraged in the social sciences, though, because "it fails to distinguish between sole authorship and co-authorship" [1].
I'm curious now why it is so important to distinguish between sole authorship and co-authorship via writing style. If you are reading the paper itself, a glance at the author list will tell you that.
This is... a bit dated. While the passive voice was recommended in the past, most journals and advisors now recommend using the active voice in papers.
I think Ryan is mostly right. We can't give everyone a shot at everything; there simply is no way to evaluate every claim from everyone. Imagine if we had to read papers proposing to inject people with disinfectant.
Of course that does leave us with the authority of guilds in each area. They get to define who is a member and who is not, and where the research money goes within each area.
The major problems I find with this approach are that we end up letting the experts decide things they are not experts in, and that we have to trust their own policing.
They cannot, within their own guild, decide what tradeoffs society should make; that necessarily requires more than one expert opinion, and there's no way to say the concerns of one group must outweigh those of another. It's always a political decision by someone who isn't an expert in every field.
As for their own policing, there are plenty of examples of the experts being wrong. How a non-expert is gonna fix that I don't know, but not having external oversight is a due diligence failure.
It's not a real problem, though. We can give everyone a shot at everything simply because very few will take it, and even fewer will reach the point where their claims have to be evaluated by others or by someone in a position of decision making. Having authorities on subjects can't do anything good; it only makes it possible to silence and ignore uncomfortable questions and opinions.
The problem is essentially spam filtering with much higher stakes. A priori the source of an unusual paper might be one of:
- a genuine innovation from a polymath outside the field or undiscovered talent (Ramanujan)
- someone outside the field applying a well-understood technique from their own field in a new area (used to be common in bioinformatics back when it was done with Perl)
- someone in a non-first-world country much closer to the problem or with a connection to traditional relevant knowledge
Then there are the ones which turn out to be wrong:
- respected but crank-ish behaviour within the field: someone well respected who is extremely enthusiastic about an idea beyond all evidence, such as Linus Pauling's enthusiasm for vitamin C
- respected but ideological behaviour within the field: e.g. the warring schools of economics
- novices who are bad at checking their work: students who believe they've solved a famous conjecture but left out a minus sign on page 65. Most senior academics deal with a lot of these routinely.
- field outsiders getting cranky in another field: William Shockley's opinions on biology
- freelance not-for-profit cranks: outsiders who are wrong, but simply because of error and not malice
- for-profit cranks: this is where it starts getting genuinely dangerous, as these people can be high-output and are aimed at the public. All manner of quacks are in this category, such as the "miracle mineral solution" people who have been trying to get people to inject bleach.
- culture war cranks: the Alex Joneses and David Ickes of the world. Even more dangerous, as they are not afraid to libel people and destroy those who cross them.
It is no more possible to submit every paper you see to rigorous review and replication than it is to do your spam filtering by sending money to every Nigerian prince who asks for it and seeing which ones send it back. They will destroy you because their capacity to waste your time and effort exceeds yours. You have to go Bayesian; look for red flags that indicate that it falls into one of those categories above.
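To make "go Bayesian" concrete, here is a toy version of the update, with invented numbers (only the shape of the calculation matters, not the exact values):

```python
# Toy Bayes update: how likely is a paper to be genuine given that it
# shows a known red flag? All probabilities here are invented for
# illustration; they are not measurements of anything.

prior_genuine = 0.001          # base rate of genuine work among unsolicited papers
p_flag_given_genuine = 0.05    # genuine work rarely shows the red flag
p_flag_given_crank = 0.60      # crank work often does

p_flag = (p_flag_given_genuine * prior_genuine
          + p_flag_given_crank * (1 - prior_genuine))
posterior = p_flag_given_genuine * prior_genuine / p_flag
print(f"P(genuine | red flag) = {posterior:.5f}")  # ~0.00008
```

Even a red flag that genuine work occasionally trips pushes the posterior close to zero, because the base rate of genuine breakthroughs among unsolicited papers is already tiny. That's the sense in which triage by red flags is rational rather than mere gatekeeping.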
Another thing that really matters: peer review from other experts in the field. We have mechanisms whose job it is to do what this article was talking about: see if the claims made add up, if the references are interpreted correctly, etc. It's obviously not perfect, but it's a hell of a lot better than no peer review.
If you only permit established domain experts to voice ideas you will severely limit potential advancements. Innovation is often the product of multiple disciplines intersecting to produce something novel.
Do you have an example from the last, say 20 years, where an actual amateur made any significant scientific contribution to an established field? I have strong doubts such a person exists. Simply because science, at the very least, requires you to be aware of what has been done already. Of course there are collaborations between researchers but this is a totally different point.
Public health policy in the midst of a global pandemic strikes me as the wrong place for innovation by amateurs with zero experience in any relevant fields.
Interesting - not quite what I expected from the article.
I think that lines like "So, we feel that we must say that the right to freedom of assembly (along with all of our other rights) are not luxuries that are graciously extended to us by the ruling class" and statements about the "WHO’s pattern of lying" make clear that this is a political opinion piece not a scientific paper.
Are you sure?! That is not a 'research paper'. There are no claims to results, no literature review, no methods, no abstract, no conclusions and no references. It is at best a review, and not an academic one.
I'm not trying to disparage the author. He's put a decent chunk of work into wrapping his head around the situation and shared with the world. More power to him. It's just that the output doesn't read like a research paper, or an academic review.
This is just hollow pro-credential posturing without any substance to it. What is the point here even supposed to be? That software engineers are too dumb to understand statistics and probability, and should just leave this to the 'experts', like the ones the original author was citing in order to make his argument?
I don't understand the critique of tone in this either. Is the idea here supposed to be "I disagree with what the author was saying, so he shouldn't have written it well with a concise tone"? That is what the author here comes off as saying: as if, when your idea is wrong, you aren't allowed to write well or something.
Maybe I'm missing something, but this person doesn't appear to be an epistemologist or even any kind of trained philosopher. I'm not sure why he thinks any of us should take advice about whom to listen to on COVID-19 from someone who isn't an expert in a relevant field.
>the author linked to many research papers by experts
>wove an argument together out of all that research
>The tone was authoritative and confident
>it’s fairly long and comprehensive
>it’s well-organized
War is Peace, Freedom is Slavery, and Ignorance is Strength. The paper is bad because it is good.
The author of the original paper was more thoughtful than this one's. A good post would have investigated the original paper's claims and responded to them. This is a low-effort take.
"For example, when evaluating claims in a high-stakes, high-uncertainty, high-complexity area, like the public policy approach to COVID-19, a good heuristic might be “does this person have expertise or experience in a relevant field?"
Ironically, the author of this blog post doesn't exhibit expertise or experience in any relevant field.
Not at all equivalent. Einstein had a relevant degree -- albeit "only" a teaching degree -- and his annus mirabilis papers were not self-published; they were submitted to and accepted by a respected scientific journal.
This post is essentially saying don't even try to comment in an intelligent formal tone, citing sources, etc, on things you don't have a credential for. A surprisingly large number of fields are just applied statistics/probability with some minor domain knowledge tossed in. So people who are familiar with statistical methods can comment on things like this very fast. Further, that guy was explicitly citing people from the field to make his argument, not making some outlandish claim without backing it up.
In this post there is literally nothing about the actual content of what the other person was saying. There is nothing here about which citations they may have been using incorrectly or misunderstanding.
There is nothing here saying what statistical tools they might have been accidentally abusing. There is nothing here! I'll speak for myself and say I'll judge ideas on their own merits not some empty credentialist garbage.
Sounds like we need better heuristics. It doesn't make sense to ignore research papers based on their origin.
Each paper should be evaluated on its merits. A system with submission, voting, and comments (HN/reddit) is a good way to filter the wheat from the chaff. It's hard to believe this basic technology hasn't reached the scientific fields.
It has. It's called "peer review". It's just that peer review is a slow process, so it doesn't really work in a fast-moving crisis environment.
Part of peer review being a slow process may be fixable (although it is difficult to compel unpaid volunteer reviewers to adhere to strict deadlines), but part of it is inherent: if you do not want to rely on heuristics such as author credentials, you need to take the time to really analyze a paper to avoid falling into the traps mentioned in the original article (confirmation bias, being misled by an authoritative tone, etc.). And HN/Reddit votes IMHO are definitely not a good example of avoiding any of these.
Voting doesn't ensure that the most accurate comment/submission will show up on top; instead, a wrong but popular claim ("many people agree with it") can easily get to the top.
The post makes a number of claims that I disagree with generally, like the idea that a person who is not a formal expert on a subject has no business offering evidence-based opinions on the subject.
I think a link to the original paper would help me better judge where the author is coming from and whether, in context, he/she is making good points here.
I am not going to defend that particular guy. I am surely not going to take this kind of advice from some generic John Doe.
But here is the example that comes out of Chief Public Health Officer of Canada: "Canada's top doctor told CBC News the federal government could have made earlier efforts to keep the COVID-19 pandemic from sweeping across the country — but moves to close borders and screen travellers for the illness sooner might not have made much of a difference."
That "might not have made much of a difference" really got me. What if it did make an effin' difference? When this kind of argument is presented by a top health official with top credentials, I do not even know what to think. OK, the CBC could've misquoted/misrepresented her point of view, but then where is the correction?
I've noticed the appeals to authority have ramped up since the authorities have been wrong over and over, again and again. At what point do we just stop calling them authorities?
At this point, the software engineer is more believable because he has fewer reasons to lie to you. The "expert" is going to be held up and crucified (in the style of this blog post) if he goes against the accepted narrative. And that's if he's not bought off right at the start.
This isn't the software engineer's credibility problem. It's yours.
Note that the reply was itself written by a software engineer as well. There is no particular reason to believe one over the other.
As for "authorities", I distinguish between people who try to understand and think well, versus those who have been recognized by bureaucracies and politicians. The first group has done astoundingly well. The second group has failed over and over again.
Sadly it is the second group that is in charge. And the result has been repeated institutional failures. :-(
> At this point, the software engineer is more believable because he has fewer reasons to lie to you
I think this is false. Said software developer can have a whole shebang of personal reasons to have his/her opinion on a subject swayed either way. As a matter of fact, your average software developer is often super opinionated about things. How this could fail to propagate to other areas, I have no idea.
Can't help but notice that the author's post, whose title (No, I won't read your amateur COVID-19 "research paper") suggests it should be about COVID-19, immediately begins by criticizing an unlinked "paper" that seems to be about the public policy impact of government actions.
> the author is a software engineer, not an economist or epidemiologist
Great! I have tried an economics math test for university students, and it was a joke. I wouldn't trust any economist to be able to analyze data, as they don't have the scientific background. I would invite any economist to try to solve my computer science calculus university test in 45 minutes.