Sounds like an amazing guy. Bell Labs had such engineering talent. I’m somewhat against breaking up Google (Alphabet) because it might be the only company left with a vaguely similar culture, capability, and capital.
True. 56K modem inventor Brent Townshend went to work at Bell Labs (87-90) straight from university, and undoubtedly having access to state-of-the-art DSP technology so early was a key factor.
I don’t think this is a science or safety issue; it’s an issue with bad ingredient labeling. They should name these numbered dyes something more understandable. “Red dye 4” sounds pretty sketchy when they could say “Cochineal extract (for coloring)”. People can then reject the product because the ingredients include a bug-derived coloring, rather than out of fear of an unknown “red dye” invented by their imagined evil food scientists.
When I was a kid I ate lunch with a girl who couldn’t have M&Ms because she was allergic to the red dye. I was appalled by this.
And the strangest thing about that story is that she was maybe 4 years old when Mars pulled the red M&Ms due to a cancer scare over a different red food coloring. My recollection was that it was a few years more recent than that, though given how shelf life and supply chains work, I may have been getting back stock. I think I eventually proved to her that there were no red M&Ms anymore. I guess her parents hadn’t bothered to check for years. Not the first injustice I had tried to right, but the easiest one.
Five years later they added Red back and I would think of her every time I ate M&Ms for a long time after.
Wikipedia reports red M&Ms were eliminated in 1976, and added back in 1987. I'm sure it took several months for these changes to make it to the marketplace, but probably not years; M&Ms have a reasonable shelf life, but they do degrade, so year old stock isn't great.
My Mom was pretty salty about the red M&Ms going missing, and refused to eat blue M&Ms for quite some time.
As I said, I’m pretty sure that number is wrong by a little. And does 76 mean January or December? That’s a long time for small children. This thread is about people announcing things and then taking a while to do them. But we were in a town hit particularly hard by a recession. I have no doubt they were turning inventory closer to the Best By date than yours was. I bet all those sales I remembered seeing (3 for a dollar!) were overstock.
The only reason they add dyes, outside of baked goods IMO, is because they've used so many artificial ingredients, fillers, and preservatives that the resulting food product no longer looks appetizing. Whole, fresh food has never needed dyes added to it to be enticing to our monkey brains.
Carmine is better known as Natural Red 4 these days. Doesn't have much taste. Saffron adds basically no taste in the amounts typically used for something like saffron rice. Squid ink, again, is mostly for the striking color. The taste isn't particularly great.
Turmeric can go both ways, but the ground turmeric that's historically common for preservation reasons is much less flavorful than the fresh root. It's mostly a color thing.
Of course, we can also just open up a medieval cookbook to see what they say. The Forme of Cury is a nice 14th century example that's available from Gutenberg:
https://www.gutenberg.org/ebooks/8102
> As to colours, which perhaps would chiefly take place in suttleties, blood boiled and fried was used for dying black; saffron for yellow, and sanders for red. Alkenet is also used for colouring, and mulberries; amydon makes white; and turnesole [for yellow]
Alkanet is commonly used today for Rogan josh, but historically would have been better known for rouge and dyeing wine. A Mediterranean cookbook might have instead chosen amaranth for the same purpose.
You need so little turmeric or annatto to color food, and they impart so little flavor in so many applications, that the reason they are used is very obvious.
Fruits and vegetables from a few hundred years ago would be almost unrecognizable and unpalatable to modern consumers. The colorful, delicious, and durable fruits and vegetables of today are the result of lots of work and selective breeding.
Most fruits and vegetables in grocery stores taste pretty bland. They're bred more for appearance, shelf stability, regularity, and transport rather than taste.
There are legendary varieties that are lost to time. Occasionally we rediscover them, and we get to compare. Usually the modern industrial varieties are pale imitations.
Although you're free to like/dislike whatever you want, calling other people's food vile seems kind of mean. There are plenty of old/ancient foods that look exactly the same as jell-o:
warabi-mochi, nata-de-coco, aiyu jelly, kokum, annindofu, kanten, blancmange, to name just a few.
Jell-o is a byproduct of the meat industry; none of what you listed seems as bad, or looks as bad, as a mass-produced, artificially colored and artificially flavored blob of pork gelatin sitting in a plastic cup.
If anything I'd say my take is less insulting to these other dishes than you are by comparing them to jell-o
Wild salmon have their characteristic color because they are eating organisms that contain the naturally occurring Astaxanthin. Farmed salmon subsist on grains, fish oils, etc and come out looking grey unless pigments are added to their feed.
Interesting, the salmon I have caught have not been as colorful - I mostly fish in the rivers though.
edit: it looks like that vid had some steelhead (trout) mixed in? This is more like what I have seen, but the color is even more "dulled" in person https://www.youtube.com/watch?v=e09UmeqAd4g
Because being unhealthy is the natural state of things, and keeping a handle on that fact, at scale, is difficult and complicated. We used to do a much worse job of it, though. Humans living in developed economies where everyone eats all these oft-maligned foods live much longer than their ancestors did a few centuries ago. And those who live into old age tend to remain healthier longer than those who did a few centuries ago.
That's not to say that there isn't room for improvement, or that there aren't things in our food supply that don't belong there. But a sense of perspective is important. "Is this food coloring increasing people's lifetime risk of a specific cancer from 0.005% to 0.01%?" is still a pretty tidy improvement over, "Ugh, yet another outbreak of ergotism. Well, why don't we try burning witches to see if that puts it to a stop."
One of the things they have that people in developed economies generally don't is a 50% infant mortality rate.
The ones that don't have that rate achieve it through access to very unnatural artifacts such as vaccines, which are quite likely to have been made using ultramodern technologies such as genetic modification.
Or, I've got quite a few friends who have various congenital conditions that mean that they absolutely would not have survived in a society with a more "natural" foodway. With the modern food supply chain, though, they're doing just fine. Unnatural things you get in some ultraprocessed foods, such as vitamin fortification, mean they can even do it without having to worry about developing comorbid chronic ailments due to malnutrition.
That is survivorship bias. Ironically, if you want signs of good health practices, look for unhealthy people - it means they can survive, versus the unhealthy simply dying off.
A really good example of this was the paper that kicked off the whole "omega-3 fatty acids for heart health" thing. It ultimately got retracted.
The gist of the paper was that they observed that Inuit communities have really low rates of heart disease, and hypothesized that it could be because their traditional diet is very high in omega-3 fatty acids. The problem is, they don't actually have low rates of heart disease. They just have low rates of heart disease diagnosis, because they also have limited access to health care.
Little-s science can’t get “corrupted” because it is just a tool. When the scientific method is used to determine what people prefer to buy based on one second of looking at the product, that is arguably an immoral use of the scientific method, especially if the health of the users is not taken into account.
That’s also to say that “trust the science” can be a dangerous way to shut down discussion when people are actually grasping for words to understand whether the scientific method is being improperly used.
Are people so unhealthy? Life expectancies continue to rise. The "a majority of americans have a chronic health problem" stats include things like back pain. It turns out that if you live a long time you get chronic health problems.
There's doubt about this. While high sugar and low fiber is problematic, sheer quantity might be a bigger culprit. And some indigenous populations seem to remain relatively healthy on low-fiber diets (i.e. eating mostly animal products).
My off-the-cuff opinion: find a way to verify that the people on the site are not bots and are actually real. This is a hard problem with some expensive solutions, privacy implications, significant trade-offs, and real costs; however, it may be worth trying to address. Imagine Twitter (X) or Facebook without the noise.
It's basically impossible. The sites that manage to be strict enough just end up with bots which are real phones in a building somewhere.
You just have to design so the bots aren't relevant. The problem with Twitter, Facebook, and friends is that they push the bot content at you, even if you don't follow them.
The reason people suggested that for email is that you have to send a large number of messages to spam. With social media, that's not true. A single post can be viewed a large number of times.
Use quadratic post fees. The larger the number of people who try to view a post, the larger the fee becomes; if it isn't paid, the post stops being viewable.
So the fee isn't a one-time fee, but an "open" fee that keeps increasing.
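Roughly, something like this (a toy sketch; the names and the rate constant are my own assumptions, not any real platform's design):

```python
# Quadratic "open" fee: the poster's total fee grows with the square of views.

RATE = 0.0001  # dollars per view^2, chosen arbitrarily for illustration

def cumulative_fee(views: int) -> float:
    """Total fee the poster owes once `views` people have seen the post."""
    return RATE * views ** 2

def marginal_fee(views: int) -> float:
    """Extra charge incurred by the views-th view; grows linearly with views."""
    return cumulative_fee(views) - cumulative_fee(views - 1)

def can_serve(views: int, poster_deposit: float) -> bool:
    """Stop showing the post once the deposit can't cover the next view."""
    return poster_deposit >= cumulative_fee(views + 1)

# 100 views cost $1 in total; 10,000 views cost $10,000.
for n in (100, 1_000, 10_000):
    print(n, cumulative_fee(n), marginal_fee(n))
```

A viral post quickly prices itself out of circulation, while ordinary posts stay effectively free.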
Another mechanism is to require a fee to view. If, after viewing the content, the user deems it isn't valuable, they flag it in some way, or they simply don't bookmark it, and this indicates the poster should pay the display fee.
I still don't think it would work now though. People don't trust social networks like they trusted Facebook in the olden days, and they never will again.
Is there any path forward to fixing the current reproducibility crisis in science? Individuals can do better, but that won't solve a problem at this scale. Could we make systemic changes to how papers are validated and approved for publication in major journals?
Pre-registration is a pretty big one: essentially, you outline your research plan (what you’re looking for, how you will analyze the data, what bars you are setting for significance, etc.) before you do any research. Your plan is reviewed and accepted (or denied), often by both the funding agency and the journal you want to submit to, before they know the results.
Then you perform the experiment exactly* how you said you would based on the pre-registration, and you get to publish your results whether they are positive or negative.
* Changes are allowed, but must be explicitly called out and a valid reason given.
A huge benefit of that is that it would force the publication of null results, although there would still be no incentive for others to cite null result publications, and citations are unfortunately the main metric that determines the "value" of a scientist. Is there a way to make null result publications valuable?
Also, forcing pre-registration on everyone would be problematic because some types of research are not well-suited to strict planning and committee approval -- how would you quickly make adjustments to an experiment? How would you do exploratory data analysis? Wouldn't serendipitous discoveries be suppressed? Etc.
You can still do exploratory data analysis, you just have to say that’s what you’re doing ahead of time.
This way, when you find one significant correlation after testing ten thousand options, you have to say you tried all ten thousand, rather than making it sound like you only tested the one.
What is boring about this? You get the guarantee of publishing your work, even if you get a negative result.
I take it you don’t do research. Cause boring is nothing compared to wasting months of time and money only to get a negative result that nobody will publish.
I'm not in academia, but I do R&D, I've published several times, and that's not how I work at all.
I have a broad and open-ended focus. I work as usual on the things I find interesting; sometimes I see a thing that looks interesting and decide to investigate. Sometimes my initial tests give good results, but more often they don't; even then, they give me an idea to do something completely different, and some iterations later I have a result.
I imagine that depends on the field of research. IT is cheap, but I imagine a physicist who wants to do an experiment must secure funding first, because otherwise it's impossible to do anything. And it requires one to commit to a single topic of research.
That part is true in all fields. And one of the things that pre-registration enables is the publishing of those negative results.
Otherwise, once you've done the research and gotten the negative result, nobody wants to publish it (unless it’s very flashy). Without being able to publish negative results, and therefore read about them, each researcher must conduct experiments already known, if only privately, not to work.
So they have to take time to write up the experiments they were doing as exploration, and nobody will read them, because who wants to spend time reading about failed experiments?
People will still want to do their own exploring to get a feel for a problem.
From the perspective of a dishonest researcher, what are the compliance barriers to secretly doing the research work, and only after that doing the pre-registration?
Disclosure: I'm a scientist, specializing in scientific measurement equipment, so of course reproducibility is my livelihood.
But at the same time, I doubt that fields like physics and chemistry had better practices in, say, the 19th century. It would be interesting to conduct a reproducibility project on the empirical studies supporting electromagnetism or thermodynamics. There were probably a lot of crap papers!
Those fields had a backup, which was that studies and theories were interconnected, so that they tended to cross-validate one another. This also meant that individual studies were hot-pluggable. One of them could fail replication and the whole edifice wouldn't suddenly collapse.
My graduate thesis project was never replicated. For one thing, the equipment that I used had been discontinued before I finished, and cost about a million bucks in today's dollars. On the other hand, two labs built similar experiments that were considerably better, made my results obsolete, and enabled further progress. That was a much better use of resources.
I think fixing replication will have to involve fixing more than replication, but thinking about how science progresses as a whole.
Reproducibility studies are costly in time, reagents, and possibly irreplaceable primary samples. I usually would prefer a different study looking at similar mechanisms using different methods than a reproduction of the original methods, although there’s an important place for direct replication studies like this as well. We can also benefit from data sleuths uncovering fraud, better whistleblower systems, and more ability for graduate students to transfer out of toxic labs and into better ones with their funding, reputation and research progress intact.
Scientists have informal trust networks that I’d like to see made explicit. For example, I’d like to see a social media network for scientists where they can PRIVATELY specify trust levels in each other and in specific papers, and subscribe to each others’ trust networks, to get an aggregated private view of how their personal trusted community views specific labs and papers.
> Scientists have informal trust networks that I’d like to see made explicit. For example, I’d like to see a social media network for scientists where they can PRIVATELY specify trust levels in each other and in specific papers, and subscribe to each others’ trust networks, to get an aggregated private view of how their personal trusted community views specific labs and papers.
That sounds fascinating, but I'd have a darned high bar to participate, to make sure I wasn't inadvertently disclosing my very personal trust settings. Past experiences with intentional or unintentional data deanonymization (or just insufficient anonymization) make me very wary of such claims.
A dream of mine was that in order to get a PhD, you would not have to publish original research, but instead you would have to _reproduce existing research_. This would bring the PhD student to the state of the art in a different way, and it would create a natural replication process for current research. Your thesis would be about your replication efforts, what was reproducible and what was not, etc.
And then, once you got your PhD, only then you would be expected to publish new, original research.
That used to be the function of undergraduate and Masters theses at the Ivy League universities. "For the undergraduate thesis, fix someone else's mistake. For the Master's thesis, find someone else's mistake. For the PhD thesis, make your own mistake."
Yes, but nobody wants to acknowledge the elephant in the room. Once again, this is why defunding research has gained merit. If more than half of new research is fake, don't protest when plugs are being pulled; You're protesting empirical results.
Science (including all the fake stuff) has advanced humanity immensely. I cannot imagine that cutting research funding to do less science (with the same percentage of fake) is helpful in any way.
You committed the same sin you are attempting to condemn, while sophomorically claiming it is obvious this sin deserves an intellectual death penalty.
It made me smile. :) Being human is hard!
Now I'm curious, will you acknowledge the elephant in this room? It's hard to, I know, but I have a strong feeling you have a commitment to honesty even if it's hard to always enact all the time. (i.e. being a human is hard :) )
I had always envisioned an institute for reproducibility & peer review. It would be a federally funded institute that would require PhD candidate participation as an additional requirement to receive your degree. Really, it wouldn't be a single place, but an office or team at each university where proper equipment was available, and perhaps similar conditions for reproducing specific research. Of course, the feasibility of this is pretty low.
There is a huge amount of pressure to publish, publish, publish.
So many researchers prefer to write very simple things that are probably true, or applicative work, which is kind of useful, or to publish false/fake results.
Maybe try to define a "reproducible" h-index, i.e. your publication doesn't count (or counts for less) until a different team has reproduced your results; the team doing the reproduction work gets some points too.
(And maybe add more points if, in order to reproduce, you didn't have to ask the original team plenty of questions, i.e. the original paper didn't omit essential information.)
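A toy sketch of how such a metric might be computed (the discount factor and field names are entirely my own assumptions):

```python
from dataclasses import dataclass

@dataclass
class Paper:
    citations: int
    reproduced: bool  # has an independent team reproduced the result?

UNREPRODUCED_WEIGHT = 0.25  # assumed discount for not-yet-reproduced work

def reproducible_h_index(papers: list[Paper]) -> int:
    # Discount the citations of papers nobody has reproduced yet.
    weighted = sorted(
        (p.citations if p.reproduced else p.citations * UNREPRODUCED_WEIGHT
         for p in papers),
        reverse=True,
    )
    # Standard h-index computation over the weighted citation counts.
    h = 0
    for i, c in enumerate(weighted, start=1):
        if c >= i:
            h = i
    return h

papers = [Paper(50, True), Paper(40, False), Paper(10, True), Paper(3, False)]
print(reproducible_h_index(papers))  # weighted counts [50, 10, 10, 0.75] -> 3
```

Points for the reproducing team could be layered on top in the same spirit.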
I'm curious, I don't get why the downvotes? Having to race to publish pushes people to cheat. It didn't occur to me that this was a bad point, but if you have a different opinion I would gladly hear it!
Because a great many who comment on this site are infantile but self-congratulating idiots who just can't help themselves on downvoting anything that doesn't fit their pet dislikes. That button should be removed or at least made not to grey-out text.
Yeah "individuals do better" is never the answer -- you've got to structure incentives, of course.
I don't think you want to slow down publication (and peer review and prestige journals are probably useless/obsolete in the era of the internet); it's already crazy slow.
So let's see: you want to incentivize two things: (1) no false claims in original research, and (2) people trying to reproduce claims.
So here's a humble proposal for a funding source (say...the govt): set aside a pot of money specifically for people to try to reproduce research; let this be a valid career path. The goal should be to get research validated by reproduction before OTHER research starts to build on those premises (avoiding having the whole field go off on wild goose chases like what happened with Alzheimer's). And then, when results DON'T repro, blackball the original researchers from funding. (With whatever sort of due process is needed to make this reasonable.)
Punishing researchers who make mistakes or get unlucky due to noise in the data is a recipe for disaster, just like in other fields. The ideal amount of fraud and false claims in research is not zero, because the policing effort it would take to accomplish this goal would destroy all other forms of value. I can't emphasize enough how bad an idea blackballing researchers for publishing irreproducible results would be.
We have money to fund direct reproducibility studies (this one is an example), and indirect replication by applying orthogonal methods to similar research topics can be more powerful than direct replication.
Given the way that science and statistics work, completely honest researchers who do everything correctly and don't make any mistakes at all will still have some research that fails to reproduce. And the flip side is that for some completely correct work that got the right answer, some proportion of the time the reproduction attempt will incorrectly fail. Type 1 and Type 2 errors are both real and occur without any need for misconduct or mistakes.
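A quick simulation makes the point (my own illustration; the conventional alpha = 0.05 and power = 0.8 are assumed):

```python
import random

def experiment(effect_is_real: bool, power: float = 0.8, alpha: float = 0.05) -> bool:
    """Return True if a single run of the experiment comes up significant."""
    return random.random() < (power if effect_is_real else alpha)

def significance_rate(effect_is_real: bool, trials: int = 100_000) -> float:
    # Fraction of independent runs that report a significant result.
    return sum(experiment(effect_is_real) for _ in range(trials)) / trials

random.seed(0)
print(f"real effects replicate:   {significance_rate(True):.1%}")   # ~80%
print(f"null effects 'replicate': {significance_rate(False):.1%}")  # ~5%
```

So even a perfectly honest lab should expect roughly one in five of its true results to fail a single replication attempt at typical statistical power.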
> With whatever sort of due process is needed to make this reasonable
Is it not reasonable to not continue to fund scientists whose results consistently do not reproduce? And should we not spend the funds to verify that they do (or don't) reproduce (rather than e.g. going down an incredibly expensive goose-chase like recently happened w/ Alzheimer's research)?
Currently there is more or less no reason not to fudge results; your chances of getting caught are slim, and consequences are minimal. And if you don't fudge your results, you'll be at a huge disadvantage when competing against everyone that does!
Hence the replication crisis.
So clearly something must be done. If not penalizing failures to reproduce and funding reproduction efforts, then what?
Your way of thinking sounds alien to me. You seem to assume that people mostly just follow the incentives, rather than acting according to their internal values.
Science is a field with low wages, uncertain careers, and relatively little status. If you respond strongly to incentives, why would you choose science in the first place? People tend to choose science for other reasons. And, as a result, incentives are not a particularly effective tool for managing scientists.
Of course people will follow their own internal values in some cases, but we really want to arrange things so that the common and incentivized path is the happy path!
And without the proper systemic arrangements, people with strong internal values will just tend to get pushed out. E.g., an example from today's NY times: https://archive.is/wV4Sn
I don't mean to seem too cynical about human nature; it's not that people with good motivations won't exist, it's that you need to create a broader ecosystem where those motivations are adaptive. Otherwise they'll just get pushed out.
By analogy, consider a competitive sport, like bicycling. Imagine if it was just an honor system to not use performance enhancing drugs; even if 99% of cyclists were completely honest, the sport would still be dominated by cheaters, because you simply wouldn't be able to compete without cheating.
The dynamics are similar in science if you allow for bad research to go unchallenged.
(PS: Being a scientist is very high-status! I can imagine very few things with as much cachet at a dinner-party as saying "I'm a scientist".)
Internal motivation and acting according to your values is not necessarily a good thing. For example, repeat offenders are often internally motivated. They keep committing crime, because they don't fit in. And because their motivations are internal, incentives such as strict punishments have limited effect on their behavior.
Science selects actively against people who react strongly to incentives. The common and incentivized path is not doing science. Competitive sports are the opposite, as they appeal more to externally motivated people. From a scientist's point of view, the honest 99% of cyclists would absolutely dominate the race, as they ride 99% of the miles. Maybe they won't win, but winning is overrated anyway. Just like prestigious awards, vanity journals, and top universities are nice but ultimately not that important.
> Science selects actively against people who react strongly to incentives
I don't think this is true at all! If it were true, we would not have the reproducibility crisis and the various other scandals that we do, in fact, have.
Scientists are humans like any other, and respond to incentives.
Funding is a game -- you have to play the game in a way that wins to keep getting funding, so necessarily idealists that don't care about the rules of the game will be washed out and not get funding. It's in our collective interest, then, to make sure that winning the game equates to doing good science!
In the almost 20 years I've done academic research, I've met thousands of scientists. Some of them have been involved in various scandals, but as far as I know, none of the scandals were about scientific integrity. When it comes to academic scandals, those involving scientific integrity seem to be rare.
The reproducibility crisis seems to be mostly about applying the scientific method naively. You study a black box nobody really understands. You formulate a hypothesis, design and perform an experiment, collect data, and analyze the data under a simple statistical model. Often that's the best thing you can do, but you don't get reliable results that way. If you need reliability, you have to build models that explain and predict the behavior of the former black box. You need experiments that build on a large number of earlier experiments and are likely to fail in obvious ways if the foundations are not fundamentally correct.
I'm pretty bad at getting grants myself, but I've known some people who are really good at it. And they are not "playing the game", or at least that's not the important part. What sets them apart is the ability to see the big picture, the attention to details, the willingness to approach the topic from whatever angle necessary, and vision of where the field should be going. They are good at identifying the problems that need to be solved and the approaches that will likely solve them. And then finding the right people to solve them.
I guess at a very high level, the question is: do you think the current system and what it incentivizes is fine/optimal (and are you then presumably sanguine about things like the Lesné Aβ*56 fraud or the OP article [the failure of over half the biomedical experiments tested to repro]), or do you think it can be improved?
To me it clearly seems like there is room for improvement!
Even granting that most scientific researchers are pure of heart and noble of purpose, the kinds of science we get (and how quickly we uncover spurious results) are still going to depend on the systemic incentives of funding, publishing, & prestige -- so it's worth trying to structure those systems in a way that rewards good science as much as possible.
> The ideal amount of fraud and false claims in research is not zero, because the policing effort it would take to accomplish this goal would destroy all other forms of value.
Surely that just means that we shouldn't spend too much effort achieving small marginal progress towards that ideal, rather than that's not the ideal? I am a scientist (well, a mathematician), and I can maintain my idealism about my discipline in the face of the idea that we can't and shouldn't try to catch and stop all fraud, but I can't maintain it in the face of the idea that we should aim for a small but positive amount of fraud.
You CANNOT create a system that has zero fraud without rejecting a HUGE amount of legitimate work/requests.
This is as true for credit card processing as it is for scientific publishing.
There's no such thing as "Reject 100% of fraud, accept 100% of non-fraud". It wouldn't be "ideal" to make our spaceships with anti-gravity drives, it would be "science fiction".
The relationship between how hard you prevent fraud and how much legitimate traffic you let through is absurdly non-linear, and super dependent on context. Is there still low hanging fruit on the fraud prevention pipeline for scientific publishing?
That depends. Scientists claim that having to treat each other as hostile entities would basically destroy scientific progress. I wholeheartedly agree.
This should be obvious to anyone who has approved a PR from a coworker. Part of our job in code review is to prevent someone from writing code to do hostile things. I'm sure most of us put some effort towards preventing obvious problems, but if you've ever seen https://en.wikipedia.org/wiki/International_Obfuscated_C_Cod... or some of the famous bits of code used to hack nation states then you should recognize that the amount of effort it would take to be VERY SURE that this PR doesn't introduce an attack is insane, and no company could afford it. Instead, we assume that job interviews, coworker vibes, and reputation are enough to dissuade that attack vector, and it works for almost everyone except the juiciest targets.
Science is a high trust industry. It also has "juicy targets" like "high temp superconductor" or "magic pill to cure cancer", but scientists approach everything with "extreme claims require extreme results" and that seems to do alright. They mostly treated LK-99 with "eh, let's not get hasty" even as most of the internet was convinced it was a new era of materials. I think scientists have a better handle on this than the rest of us.
> You CANNOT create a system that has zero fraud without rejecting a HUGE amount of legitimate work/requests.
I think that we are using different definitions of "ideal." It sounds like your definition is something like "practically achievable," or even just "can exist in the real world," in which case, sure, zero fraud is not ideal in that sense. To check whether I am using the word completely idiosyncratically, I just looked it up in Apple Dictionary, and most of the senses seem to match my conception, but I meant especially "2b. representing an abstract or hypothetical optimum." It seems very clear to me that you would agree with zero fraud being ideal in sense "2a. existing only in the imagination; desirable or perfect but not likely to become a reality," but possibly we can even agree that it also fits sense 2b above.
On the data analysis side, I think making version control both mandatory and automatic would go a long way.
One issue is that internal science within a company/lab can move incredibly fast -- assays, protocols, datasets and algorithms change often. People tend to lose track of what data, what parameters, and what code they used to arrive at a particular figure or conclusion. Inevitably, some of those end up being published.
Journals requiring data and code for publication helps, but it's usually just one step at the end of a LONG research process. And as far as I'm aware, no one actually verifies that the code you submitted produces the figures in your paper.
It's a big reason why we started https://GoFigr.io. I think making reproducibility both real-time and automatic is key to making this situation better.
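For a sense of the general idea, here's a minimal sketch of automatic figure provenance (my own illustration of the concept, not GoFigr's actual API): stamp every figure with the code commit, data hash, and parameters that produced it.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone

def data_hash(path: str) -> str:
    """SHA-256 of the input data file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def git_commit() -> str:
    """Exact code revision used to generate the figure."""
    return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()

def record_provenance(figure_path: str, data_path: str, params: dict) -> None:
    """Write a sidecar JSON next to the figure with what's needed to reproduce it."""
    manifest = {
        "figure": figure_path,
        "data_sha256": data_hash(data_path),
        "code_commit": git_commit(),
        "params": params,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(figure_path + ".provenance.json", "w") as f:
        json.dump(manifest, f, indent=2)

# e.g. record_provenance("fig3.png", "assay_results.csv", {"threshold": 0.05})
```

If that sidecar is written automatically every time a figure is rendered, "what data and code produced this figure?" stops being an archaeology project.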
There's usually indirect reproduction. For instance I can take some principle from a study and integrate it into something else. The real issue is that if the result is negative - at least from my understanding - the likelihood of publication is minimal, so it isn't communicated. And if the principle I've taken was at fault there's a lot of space for misattribution, I could blame a litany of different confounders for failures until, after some long while I might decide to place blame on the principle itself. That itself may require a complete rework of any potential paper, redoing all the experiments (depending on how anal one is in data collection).
Just open up a comment section for institutional affiliates.
Yes, but it costs money. There's no solution that wouldn't.
IMO, the best way forward would be simply doubling every study with independent researchers (ideally they shouldn't have contact with each other beyond the protocol). That certainly doubles the costs, but it's really just about the only way to catch bad actors early.
> Yes, but it costs money. There's no solution that wouldn't.
True, although, as you doubtless know, as with most things that cost money, the alternative also costs money (for example, in funding experiments chasing after worthless science). It's just that we tend to set aside the costs that we have already priced in. So I tend to think in such settings that a useful approach might be to see how we can make such costs more visible, to increase the will to address them.
The flaw being that cost is everything. And, in particular, the initial cost matters a lot more than the true cost. This is why people don't install solar panels or energy efficient appliances.
When it comes to scientific research, proposing you do a higher cost study to avoid false results/data manipulation will be seen as a bug. Bad data/results that make a flashy journal paper (room temp superconductivity, for example) bring in more eyeballs and prestige to the institute vs a well-done study which shows negative results.
It's the same reason the public/private cooperation is often a broken model for government spending. A government agency will happily pick a road builder that puts out the lowest bid and will later eat the cost when that builder ultimately needs more money because the initial bid was a fantasy.
Making costs more visible is a good goal, I just don't know how you accomplish that when surfacing those costs will be seen as a negative for anyone in charge of the budget.
> for example, in funding experiments chasing after worthless science
This is tricky. It's basically impossible to know when an experiment will be worthless. Further, a large portion of experiments will be worthless (like 90% of them).
An example of this is superglue. It was originally supposed to be a replacement for the glass in jet fighter canopies. While running refractometer experiments on it and other compounds, the glue destroyed the machine. Funnily, it was known to be highly adhesive even before the experiment, but putting the "maybe we can sell this as a glue" thought to it didn't happen until after the machine was destroyed.
A failed experiment that led to a useful product.
How does someone budget for that? How would you start to surface that sort of cost?
That's where I think the current US grant system isn't a terrible way to do things, provided more guidelines are put in place to enforce reproducibility.
> > for example, in funding experiments chasing after worthless science
> This is tricky. It's basically impossible to know when an experiment will be worthless. Further, a large portion of experiments will be worthless (like 90% of them).
I don't mean "worthless science" in the sense "doesn't lead to a desired or exciting outcome." Such science can still be very worthwhile. I mean "worthless science" in the sense of "based on fraudulent methods." This might accidentally arrive at the right answer, but the answer, whether wrong or accidentally right, has no scientific value.
Yes. Accepting the uncertainty and publishing more than a few.
Often famous/highly cited studies are not replicable. But if you want to work on a similar research problem and publish null/unexciting results, you're in for a fight. Journals want new, fun, exciting results, but unfortunately the world doesn't work that way.
I’ve had trouble verifying that these chips still have reliability problems after the microcode updates. There seem to be a lot of anecdotes, but PC manufacturers are reporting normal rates of warranty returns. It’s possible long-term reliability is worse, but it’s pretty easy to stress test a CPU.
Anyway, I would love to hear if anyone has decent quality data about these chips' reliability.
Parent stated "I’ve had trouble verifying that these chips still have reliability problems after the microcode updates"
Your video links to Intel CPUs doing remarkably poorly before the latest microcode update. Parent was presumably trying to determine if Intel has actually fixed (or at least mitigated) the problem, which your video (much as I love Gamers Nexus) doesn't really have an answer for.
Is the US health system a free market? The government provides healthcare via Medicare and Medicaid for seniors, the people for whom life expectancy and healthcare quality have the highest correlation.
It’s a mixed bag, but the funding source doesn’t necessarily make it a controlled market, to the degree that Medicare and Medicaid pay non-government providers and allow competition (which again, is mixed). Medicare and Medicaid coverage make up one third of the US population. The other two thirds are on group/employer insurance, private insurance, or no insurance at all.
For non-seniors, the medical insurance system certainly sometimes doesn’t feel like a free market from the consumer perspective, but the insurance companies are private for-profit institutions, and the medical providers are too, so it may well fit the definition.
> the people for whom life expectancy and healthcare quality have the highest correlation.
What do you mean by this? Fatalities among the young will have a much larger impact on lowering nationwide life expectancy than fatalities among the elderly.
The quote is from a Crikey reporter. I (an Australian) wouldn't agree that the US health system is a classic free market, but it appears to have more regulatory capture by vested commercial, non-government, profit-oriented interests than by civil authorities pursuing the best social-policy outcomes for the masses.
(Describing various systems in various countries as either communist or free-market capitalist is pretty simplistic; it's not really a linear spectrum either.)
I'd also argue that the foundation for a high life expectancy doesn't start with good health care for seniors .. unless the metric is "life support via artificial means" .. life expectancy is grounded in healthy living and exercise from an early age, well maintained with good health programs.
Yes, and you can use it in text-to-text mode if you want. A key benefit for turn-based usage (where you have a running back and forth between user and assistant) is that you only need to send the incremental new input message for each generation. This is better than "prompt caching" on the chat completions API, which is basically a pricing optimization; this is an actual technical advantage that uses less upstream bandwidth.
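For illustration, assuming an API shaped like OpenAI's Responses API with previous_response_id chaining (the model name and prompts are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# First turn: send the full input.
first = client.responses.create(
    model="gpt-4.1",  # placeholder
    input="Summarize the plot of Hamlet in two sentences.",
)
print(first.output_text)

# Later turns: only the new message goes over the wire. The server already
# holds the prior conversation state, so nothing is resent, unlike chat
# completions, where the whole transcript is re-uploaded every turn.
second = client.responses.create(
    model="gpt-4.1",
    previous_response_id=first.id,
    input="Now do the same for Macbeth.",
)
print(second.output_text)
```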