The last thing I read about the link between amyloid-β accumulation and Alzheimer's was that the entire field was full of faked data (https://www.science.org/content/blog-post/faked-beta-amyloid...). In particular, even treatments that directly reduce amyloid-β in the brain did not restore cognitive abilities.
At least this paper tests both cognitive abilities as well as "amyloid-β pathologies." I'm not at all an expert in this field but gold nanoparticles sounds like something you'd see on a late night infomercial, lol.
In a massive field, one researcher does not constitute "full of faked data," despite how concerning it is.
The problem is viewing individual papers as the unit of truth in science. The "self-correcting" nature of science will actually reject entire papers, and entire directions of inquiry. Including, maybe, a causal relationship between beta amyloid and AD, but maybe not.
The other key part of science is holding everything in a state of uncertainty. There are some "facts," but mostly just hints and clues. And with Alzheimer's disease in particular we are trying to make progress with completely inadequate vision; we really can't even measure so much of what we want to measure. Feynman said it back in the 1960s: physicists have failed to deliver the tools biologists need to measure what must be measured. There have been advancements, and DNA sequencing technology in the past decade has been turned into the cleverest sorts of information-theoretic microscopy by combining DNA sequences with many other biochemical processes. But we as a species still cannot measure a lot of the things we'd like to measure.
I appreciate your commitment to modernist capital-S Science here :) I'm familiar with how the field ought to work but after working in Andrew Gelman's lab for some years, also with how it can fail us. Here I think the researcher in question has had a much larger impact than you are allowing for. Here's a choice quote:
> Every single disease-modifying trial of Alzheimer’s has failed.
> The huge majority of those have addressed the amyloid hypothesis, of course, from all sorts of angles. Even the truest believers are starting to wonder. Dennis Selkoe’s entire career has been devoted to the subject, and he’s quoted in the Science article as saying that if the trials that are already in progress also fail, then “the A-beta hypothesis is very much under duress”. Yep.
The hypothesis was under great duress even in 2004, when I took a protein structure course that spent a lot of time on prions and beta amyloid. Many people have devoted their careers to chasing this down; only one, as far as we know, published impactful faked data.
However, the particular faked data, despite lots of citations, has apparently not led to any clinical trials:
> Did the AB*56 Work Lead to Clinical Trials? That’s a question that many have been asking since this scandal broke a few days ago. And the answer is that no, I have been unable to find a clinical trial that specifically targeted the AB*56 oligomer itself (I’ll be glad to be corrected on this point, though).
I wish to retract this comment, as it was not based on full information. I was going off of the data from the linked article, but here are many more cases of fraud from leaders in the field:
But he's correct: amyloid plaque theory was founded on bad data. Whether amyloid plaques are causal agents is unclear, but it was made to appear clear by poisoned data, and many studies conducted afterwards, in good faith, assumed that the information and conclusions were sound. However, that doesn't appear to be the case; instead, something along the amyloid-beta pathway is more likely to be the true causal factor, with plaques an association. It has spawned something of a wild goose chase in Alzheimer's research and treatment.
The faked data is not foundational to the field, if we are to believe the article linked from that comment.
> But my impression is that a lot of labs that were interested in the general idea of beta-amyloid oligomers just took the earlier papers as validation for that interest, and kept on doing their own research into the area without really jumping directly onto the 56 story itself. The bewildering nature of the amyloid-oligomer situation in live cells has given everyone plenty of opportunities for that! The expressions in the literature about the failure to find 56 (as in the Selkoe lab’s papers) did not de-validate the general idea for anyone - indeed, Selkoe’s lab has been working on amyloid oligomers the whole time and continues to do so. Just not Lesné’s oligomer.
And
> Did the 56 Work Lead to Clinical Trials? That’s a question that many have been asking since this scandal broke a few days ago. And the answer is that no, I have been unable to find a clinical trial that specifically targeted the AB56 oligomer itself (I’ll be glad to be corrected on this point, though).
It's hard to discern the discourse in a field one is unfamiliar with; I've tried with the Alzheimer's fiasco. Here's my tuppence:
The plaques are known to be linked to Alzheimer's, the debunking of one paper that messed with its figures does not detract from the whole body of research. The inefficacy of plaque-targeting treatments may not be proof that the plaques are not causal in nature, only that their damage is not reversible/fully understood.
That may or may not be the case, but you're rather detracting from the original comment, either deliberately or not.
The issue in the Alzheimer's world is the possibility that the very disease mechanism concept underlying the vast majority of research and interventional trials into which countless multiple billions have been poured, is incorrect.
Within that space, this is orders of magnitude more fundamental and serious than a flip aside that lots of trials have problems, so who cares about another?
> the very disease mechanism concept underlying the vast majority of research and interventional trials into which countless multiple billions have been poured, is incorrect.
Not "is incorrect," but might be incorrect. And we almost certainly won't know it is correct until we actually have a therapy.
Those pursuing cures could have waited until there was more solid science, but they and their funders took on the risk, knowing full well that the amyloid hypothesis is not proven.
This is not some indictment of science, this is normal risk taking for a problem that hugely affects society.
But the entire framing of the comment is that the amyloid hypothesis is taken as fact and not possibility, when in fact the core research question is whether it is true or not.
It is the best possible explanation so far, but four decades of research have not reached a definitive conclusion.
An open problem is not a problem for science, that's the fundamental focus of science. The problem is people misrepresenting what science is and what it aims to do.
(Late back to this, but) It's far more skewed than you're making out. That the a-beta hypothesis is true is/was the vastly dominant prevailing belief in the field, to the extent that it hasn't been a question many 'experts' were willing to meaningfully address.
To be clear: for decades, researchers wishing to pursue lines of inquiry contrary to the a-beta hypothesis struggled for traction and funding, and saw their careers struggle as a result[0]. As such, trying to disprove the a-beta hypothesis was not the core research question for many/most, for a long time.
(Edit: forgot to say, thanks for continuing the conversation, it is much appreciated! This comment may come across snippier than I intended, but please know I appreciate your effort here even though my experiences lead me to a different conclusion.) This article is just sensationalization of the standard scientific process. Grants do get awarded by friends and it does appear very much like a cabal. Or things like this:
> A top journal told one that it would not publish her paper because others hadn’t.
Oh the horror, not getting published in a top journal! Turns out that most good science gets published outside the top journals.
This sort of behavior is bad, and has always been part of the process, and may actually be better today than it was a century ago, as the clubs are not nearly so tight as they were back then.
Early in my career I remember reading some of ET Jaynes' (an early Bayesian reasoning guy) discussions of his early career, and how he had to very very carefully choose his topics so that he wouldn't upset the big personalities in physics and thereby have his entire career crushed. It's better these days than it was then!
There will be sour grapes about funding, just as there are when VCs all jump on the hype train for the same idea, but my only scientific exposure to the amyloid hypothesis over the past 20 years has been in terms of it being an unproven hypothesis. Exploratory routes toward alternative explanatory hypotheses should have been pursued, and were pursued, and will continue to be pursued, but the question of "how much" is exceptionally difficult to answer.
Perhaps I'm biased from being in Science too long, but I've seen so many sensational Stat News articles that never pan out when pressed. I wouldn't trust them at all with stuff like this.
> For more than 150 trials, Carlisle got access to anonymized individual participant data (IPD). By studying the IPD spreadsheets, he judged that 44% of these trials contained at least some flawed data: impossible statistics, incorrect calculations or duplicated numbers or figures, for instance. And in 26% of the papers, problems were so widespread that the trial was impossible to trust, he judged — either because the authors were incompetent, or because they had faked the data.
Firstly, this is only from one journal, Anesthesiology. Second, the phrase "at least" indicates that while 44% had some amount of (presumably) flawed data, only 26% of the studies were bad enough to be judged fake or severely flawed by this one (admittedly esteemed) researcher in the field of anesthesiology. It's important to be skeptical and do your homework when you hear sweeping and/or shocking results. It's also important to read carefully, especially with science journalism because it is written for clicks and broad audiences, not to reduce ambiguity and adhere to strict standards of accuracy.
I didn’t go look up the quote but based on your version here it sounds like roughly half (44%) had some kind of suboptimality, and of those roughly a quarter (26%) had serious problems preventing them from being relied on.
That means 11% of the total papers should be discarded, which means 89% of the papers can be used.
As per usual, science reporting fails to use precise language. It can be interpreted either way, although I think your interpretation is the slightly larger leap based on phrasing. In any case, it is far below the 70% (and not directed broadly at all scientific research) that GP states.
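The two readings can be made concrete with a bit of arithmetic (the percentages come from the quoted passage; the variable names and script are mine):

```python
# Two readings of the Carlisle numbers (44% / 26%) discussed above.
total_flawed = 0.44   # trials with at least some flawed data
serious = 0.26        # trials too flawed to trust

# Reading 1 (nested): 26% is a fraction of the 44% that had flaws.
nested = total_flawed * serious   # ~11% of all trials untrustworthy

# Reading 2 (flat): 26% is a fraction of all trials examined.
flat = serious                    # 26% of all trials untrustworthy

print(f"nested reading: {nested:.1%} of trials untrustworthy")
print(f"flat reading:   {flat:.1%} of trials untrustworthy")
```

Either way, both readings land well below the 70% figure at issue.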
> In particular, even treatments that directly reduce amyloid-β in the brain did not restore cognitive abilities.
Correct, but this doesn’t constitute “fake data”. It could be that amyloid-β is a marker rather than a causative factor. Or it could be that amyloid-β-related damage is downstream, and removing amyloid-β after the damage has been done won’t undo the other damage.
It’s too quick to wave away an entire field because a single theory didn’t pan out. Most medical research proceeds with a lot of dead ends before it is figured out.
The current law is more general; it’s the current policy’s consumer price heuristic that has become a bad approximation to the law. I like “The Economists’ Hour” on the topic.
I will say up front that I don't think the social good is worth what we are collectively paying for it, but I do think the market hours are a reasonable device. This is basically because there are humans involved and they need to sleep (Matt Levine has written about this).
If you want the best price, you need to have all of the market participants bidding together. Market hours serve as a coordinated period in which ~all market participants agree to be online and bidding.
Prices, thus, get stale overnight. But we assume that that is mostly okay, as business is normally conducted during business hours, and we assume that transactions can wait until the next day. ACH transfers take multiple days! (technically so do stocks, but that's mostly invisible to retail traders).
If you're a retail trader, I would caution you somewhat against trading after-hours; there is very little liquidity and it could cost you 100s of bps more.
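To make "100s of bps" concrete, here is a toy calculation with hypothetical prices (not real market data): the cost of crossing the spread, measured against the midpoint, blows up when the book is thin.

```python
def spread_cost_bps(bid: float, ask: float) -> float:
    """Cost of buying at the ask vs. the midpoint, in basis points."""
    mid = (bid + ask) / 2
    return (ask - mid) / mid * 10_000

# Regular hours: a penny-wide spread on a ~$100 stock costs ~0.5 bps.
print(spread_cost_bps(99.99, 100.00))
# Thin after-hours book: a $1.00-wide spread costs ~50 bps each way,
# and spreads of several dollars are not unusual on quiet names.
print(spread_cost_bps(99.50, 100.50))
```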
> Unfortunately, the rethinking package which is a major component of the book itself depends on the V8 engine for some reason.
This is my fault, in a sense. In order to get the new Stan compiler (written in OCaml) distributed via CRAN (which requires everything to be built from source on its antiquated build servers), we decided to use js_of_ocaml to translate the OCaml compiler into javascript. See this thread for more details: https://discourse.mc-stan.org/t/a-javascript-stanc3/11044
When I posted that, I didn't really think we would end up using it.
I'm surprised to see this argument. Tooling and infrastructure are only as clean as the services they support. I don't think you get to wash your hands because all you did was build e.g. Palantir a giant database that's great for storing locations if you know customers will be assassinating political dissidents with it.
I don't (or no longer) think it's great form to say negative things about former employers on the Internet, so allow me to expound on the positive aspects of my point.
I think having great tooling / infrastructure at least gives stakeholders more options in terms of the business direction they're taking. You can pivot and execute faster, which to me means you can pivot away from something ethically bad and execute in a different direction faster.
Great tooling / infrastructure in my mind is also ethically salvageable and redeemable. A great tool can help an ethically positive division of the company as it can help an ethically negative division of the company. It may not always be black and white.
Lastly, great tooling / infrastructure generally requires top talent, which can move anywhere and is sensitive to things like ethics. Having great tooling / infrastructure, or the threat of losing great tooling / infrastructure by losing talent to ethical issues, can act as pressure for management to choose certain projects over others. I think Grasshopper is an example of one such decision.
This did not hold up well, imo. Not sure how to count it but by some lists this is stuff like: Tencent, Alibaba, Amazon, Netflix, Priceline, Baidu, Salesforce.com, JD.com.
Others would include Uber/Lyft, Airbnb, GitHub, ...
There are only a few that would qualify that I can think of - Instagram, Snapchat, and WhatsApp.
If folks here have other examples or thoughts on why this does or doesn't hold true would love to hear them.
I think the "Thing" in Next Big Thing as this article uses it is rarely a business, more often a technology. WhatsApp has never been considered a toy, but IM as a medium has—WhatsApp's genesis as a valuable business is itself that toy becoming a Big Thing. Netflix was never a toy; streaming video was a toy in 2005, Netflix turned it into a billion dollar business.
Edit: Technology, not product, and Netflix example
Could just be where we are in the technology cycle. Using Carlota Perez terminology, in 2010 we were in the midst of Synergy for web technology and Frenzy for mobile. Now both web & mobile are nearing Maturity and whatever the next big technology cycle is still in Irruption.
If you looked at PCs from 1993-2003 you would've had a similar view. PCs from 1983-1993 underwent dramatic progress: you went from 16-color TV outputs, 64K of RAM, 8-bit CPUs, floppies, command-line interfaces, and BASIC to 24-bit color, 3D computer graphics, GUIs, 16MB of RAM, 200+ MB hard disks, 32-bit CPUs, IDEs, desktop publishing, CD-ROMs, modems and Internet access, even speech recognition and text-to-speech on some Apple machines. From 1993-2003, you had incremental progress: Microsoft won, Windows 3.1 became Win95 and then eventually Win2k, CPUs got faster, RAM and disks expanded, broadband happened, but what we used the computer for didn't change much, except for the advent of the Internet. The Internet itself was supposed to revolutionize computing, but the dot-com bust happened in 2001 and in 2003 it was still pretty much a toy. And other much-hyped developments like WebTV, VR, voice recognition, and AI had fallen flat.
There are plenty of toys that are still in Irruption now. Cryptocurrency was supposed to change the world; the bubble burst in 2018, but maybe we'll see it come back in 2020 with DeFi the way the Internet did in 2005 with social media. Drones are literal toys right now. So is VR & AR. There's been a lot of progress in computing for kids with things like Scratch, Roblox, or Minecraft.
I see AR as having an enormous future impact, games only a small fraction of that. We jumped at the chance to escape reality into our phones, but AR is far more seductive. VR limits your mobility, but AR can be used every waking hour.
I see it starting with small tweaks to reality...a dingy concrete sidewalk replaced with a golden brick road. Empty walls in an apartment awash with art, scenic views, and/or entertainment.
Evan Spiegel himself said this about Snapchat [0] (2014):
"When we first started working on Snapchat in 2011, it was just a toy. In many ways it still is – but to quote [Charles] Eames, 'Toys are not really as innocent as they look. Toys and games are preludes to serious ideas.' "
That doesn't appear to be true, Duryea and Benz vehicles (ICE burning petrol/gasoline in the late 1800s) appear to have replaced bicycles and horse-drawn carriages and acted as functional means of transportation, rather than "toys".
Earlier electric and hydrogen powered vehicles I've seen appear similarly to have been created as functional replacements for horse-drawn vehicles.
Maybe you could expand your comment to demonstrate your point?
I don't know about horse-drawn carriages, but bicycles of the late 1800s were luxuries for, well, not rich people but certainly for the well-to-do. It's one of the reasons that child-sized highwheel bicycles are rare: they were ridiculously expensive, and most certainly not a child's toy. And they did arguably serve a utilitarian purpose; at least a bicycle doesn't eat if you don't ride it. But how were bicycles, specifically highwheelers, portrayed by the press of the day? As ridiculous toys for wealthy people. Source: grew up around antique bicycle collectors, and have an 1886 Columbia Standard myself.
I remain unconvinced, however, that an early ICE-powered automobile was anything but a temperamental toy that the owner really, really wanted to be practical. I've hung around with enough people with Ford Model Ts (my parents also had a Ford Model A for a while) to suspect that something built twenty years prior to the Model T had to be laughably unreliable. Because I would only rely on a T to get me to work if my boss were pretty laid back. :-)
Your first link is blocked for me by the website owner.
The second in turn links to https://www.cbc.ca/news/business/climate-change-will-push-ca... . When they say "cars are an expensive toy for the rich" they're saying figuratively that cars aren't reliable nor efficient. I don't think anyone would disagree that [important, useful] new technologies tend to get cheaper and more efficient over time.
It's really not clear, if this is the sort of backing to the "toy" claim what it is hoped to prove?
The early users appeared to use cars for transportation, which the second link doesn't contradict, so not literal toys. The "toy" claim is just a pejorative that the link appears, implicitly, to say was used by those who were already profiting from horse-related industry.
The oft-printed statement that early automobiles were ‘playthings of the rich’ until Henry Ford’s super-cheap Model T started rolling off the assembly line in October 1908, is easily proven by scanning the makes of high-priced chariots and their wealthy owners who participated in the Amsterdam Evening Recorder’s Saturday, July 10, 1909 “Sociability Run” from Amsterdam to Lake Luzerne.
Everything I've ever seen indicates cars were considered to be "toys" for the rich when they first came out. Roads were not really designed for them. They were insanely expensive. They were not generally considered to be serious new tech that would eventually compete as transportation.
That changed when Henry Ford made cars affordable for the masses.
Old laws often said things like "You must have someone walk in front of your car ringing a bell so you don't spook the horses." This implicitly tells you that early cars were also extremely slow and horses were the form of serious transportation that the culture revolved around.
Even more so when post-WWII cheap petrol made cars affordable, and suburban living made them essential.
Automobile ownership didn't reach half of all households (based on ~4 persons per household) until between 1945 and 1950, and didn't cross the 10% threshold until the 1930s.
I think the "looking like a toy" terminology is confusing the discussion.
From the article:
> Disruptive technologies are dismissed as toys because when they are first launched they “undershoot” user needs.
So instead of asking if it "looks like a toy", what if we asked, which of those products started out by "undershooting" user's needs?
In that lens, I think you could make arguments for a few of those having started by "undershooting." The most obvious example to me is Amazon starting out as an online bookstore.
edit: I just want to add that from my vantage point this might be a true idea, but it gives very little actionable information. Perhaps Christensen's books give better insight on why this is something we should care about.
Bitcoin started off as a toy, with 10,000 BTC being exchanged for a pizza once. Ethereum smart contracts, even ones dealing with crypto-asset exchange, are often treated as toys.
I don't remember anyone saying Uber/Lyft/Airbnb were toys when they began. Some might have thought them unlikely to succeed, but that's not the same thing as being a toy.
Exactly. Those of you that are hung up on toy meaning a literal toy or video game with no intrinsic value can replace the word toy with "gimmick" or "novelty" or even "a serious idea that won't scale" (http://paulgraham.com/ds.html).
Early Air Bed and Breakfast charging people for couch surfing and selling Obama O's (https://www.airbnb.com/obamaos) to fund their startup definitely feels toy-like to me.
I'd argue there are entire categories that are currently considered toys, but eventually with the right business model, tech improvements and cultural changes will make the serious billion dollar tech companies we have now look like toys.
Toy vs. Tool means novelty vs. efficiency creator. A toy is fun to use; it exists to be used for pleasure. A tool is useful; it exists to make work easier. "Toy" should be used to mean a superfluous bonus: an unnecessary but enjoyable pastime.
Alberto has a much more 'traditional' song structure, with a specific melody/harmonies, and well defined sections. I feel like this Markov Chain process is best suited for more loosely structured ambient tracks. That said, I'd be curious to see the results!
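For readers unfamiliar with the idea, here is a minimal first-order Markov chain over note names. The transition table is made up for illustration and is not the linked project's actual code; the point is that each next note depends only on the current one, which is why the output drifts rather than following a song-level structure.

```python
import random

# Hypothetical transition table: from each note, the possible next notes.
transitions = {
    "C": ["E", "G", "A"],
    "E": ["G", "C"],
    "G": ["A", "C", "E"],
    "A": ["C", "G"],
}

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Walk the chain: pick each next note from the current note's options."""
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        notes.append(rng.choice(transitions[notes[-1]]))
    return notes

print(generate("C", 8))
```

With no memory beyond the current note, the chain can't hold a melody or return to a defined section, which is consistent with it suiting loosely structured ambient material better.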