Hacker News

Doing research has a lottery element. You explore something that might reveal an important discovery. And sometimes it just doesn't.

That doesn't mean you're a bad scientist, just an unlucky one. But it does mean you can't get tenure.

So it's easy to understand why people fake results to secure a career.



It is not all that difficult to find an interesting scientific question for which a Yes, a No, or even a Maybe is publishable and interesting. Two of my highest-impact studies have been fun "No" studies:

1. No, there is minimal or no numerical matching between populations of neurons in the retina (ganglion cells) and populations of principal neurons in their CNS target (the thalamus). That demolished the plausible, attractive numerical-matching hypothesis. I was trying valiantly to support it ;-)

https://pubmed.ncbi.nlm.nih.gov/14657177/

2. No, there is no strong coupling of the volumes of different brain regions due to "developmental constraints" on brain growth patterns. https://pubmed.ncbi.nlm.nih.gov/23011133/

That idea just struck me as silly from an evolutionary and comparative perspective. We were happy to call it into doubt.

I suspect many of the comments are being made by damn fine programmers who know right from wrong ;-) à la Dijkstra. But in biology and clinical research, telling right from wrong is an ill-defined problem with lots of barely tangible and invisible confounders.

We should still demand well designed, implemented, and analyzed experimental or observational data sets.

However, that alone is not nearly enough to ensure meaningful and generalizable results. Meta-analyses were supposed to help at this level for clinical trials, but they have been gamed by bad actors with career objectives that don't consider patient outcomes even a bit.

Highlighting the problem is a huge step forward and it looks like AI may provide some near-future help along with more complete data release requirements.

If you have done biology: Hot. Wet. Mess. But beautiful.


> That doesn't mean you're a bad scientist, just an unlucky one. But it does mean you can't get tenure.

That sounds like a really easy problem to solve. Just treat valid science as important regardless of the results. The results shouldn't matter unless they've been replicated and verified anyway.


It's a really easy solution to describe.

Actually implementing it across the academic world seems much harder.


Too easy. Define "valid" though.


Valid as in meaningfully peer reviewed, to weed out flawed or badly designed studies as well as total garbage (for example, https://nerdist.com/article/fake-star-wars-midi-chlorian-pap...), but the gold standard should be replication.

We should reward quality work, not simply the number of papers (since it's easy to churn out trash) or what the results are (because until they are verified, they could be faked).


I wholeheartedly agree.



