One of the biggest problems in science (in general) is that your reputation, and therefore your funding, is highly tied to a particular subfield. Everyone in a subfield knows each other and gives positive grant reviews and external tenure letters to other practitioners. If you try to move on to new areas of research, your reputation and funding are gone. The outcome is that areas of research interest become quasi-permanent, even when they are no longer interesting. I talk to people today who still work in my specialty of nuclear physics (relativistic heavy ion physics) and I'm shocked at how little intellectual progress has been made. At the risk of offending people, it's time to give up and declare victory, but the consequences would be dire (hundreds of professors all over the world without grants).
I think you are absolutely on the right track, but in my view the problem is less tightly coupled to the funding and more institutional.
As a professor it is simply extremely unnatural to just drop your main area of research.
First, your academic network will almost entirely stem from the subfield you have been working on. This affects the expectations of your PhD students, the recruitment of postdocs (who are often swapped between colleagues in the subfield), and the conferences you visit -- where you will see all the familiar faces of colleagues you have known for years.
Second, your mind has been shaped significantly by years of working in your own little subfield. It is easy to come up with ten projects in your area, rank them by difficulty as well as the interest they would generate amongst your colleagues, and help your students to execute them. It is significantly harder to do anything like that outside your subfield; for one, you will not have the mentoring that a typical PhD student has.
hundreds of professors without funding would not be dire consequences, imo. I feel like it would be a market correction. what will universities do? fire them and let the administrators teach?
I read your comment as another anecdote supporting the general belief that higher education requires a refactor, both for the sake of the many parties involved and for the general growth of knowledge creation.
Or you’ll dump the theoretical physics and spend on hair growth products and techniques.
The market doesn't care for the long term -- it's an optimization system for competing needs, often with immediate needs overwhelming long-term ones.
Markets also rely on regulatory forces to ensure that everyone isn't mainlining addictive chemicals, or to create disincentives for not wearing seatbelts.
Well, when you are at the professor level, it is not only about yourself. Especially in physics, where you have small teams and a few grad students under your wing. Their livelihoods and careers depend on you, and some of them are newlyweds with a baby on the way. So the human factor comes in.
Ah -- but you speak towards the granting vacuum, in absentia of an appropriate appropriation reaction by the institutions and the government. A proper refactor that provides umbrella funding for a decade-long transition would readily facilitate a painless migration, iff current academia cashflows are supplanted by a superior system... but the deus ex machina is (i) finding said superior system.
Universities and higher learning aren't and shouldn't be a market. I'd argue that one of the reasons for the sad state of academia today is that too many MBAs are running everything, including things that aren't businesses and shouldn't be run like businesses. If researchers didn't need to constantly chase the next grant, a lot of the shortsightedness and lock-in to specialties wouldn't even be a problem in the first place.
It says something about how thoroughly neoliberalism won that few people react today when, again, it is taken for granted that everything is a market and that what the market decides is right.
We have yet to bring back ice delivery for the sake of the delivery staff. We have fridges now. Life does not end because jobs are eliminated, it just changes.
The economic term is underemployment, where your only options are roles that pay less than your training and capacity warrant.
Lose your job as a foreman at a factory, and you can switch to being a manager at McDonald's.
Automation offers more of this, and it’s essentially going to go feed the owners of that automation.
This is all optimization of resources and ownership, of course, so it's essentially a question of how the new efficiency-derived surplus is split.
Part of which could go to things like keeping theoretical research programs alive.
It certainly could, but should it? If that research is into something that's a dead end, should we keep researching it just to avoid underemployment of researchers? It does seem to me that underemployment as a loss function by itself is particularly prone to local optima.
I mean these are physics and mathematical specialists.
I’d rather have a country that can keep such people engaged and active than lose them.
I mean, what other options do you see? They could join hedge funds, or open an ML-based startup.
In the end someone else has to work on this material -- the problem isn't going to go away.
And it sounds like this requires tools to do measurements of astronomical phenomena. Considering that America lost its telescope recently, and further investment is unlikely, I don't think the investment in ancillary fields is high either.
No, these people should be exploring new sub-fields. When you reach the end of a vein of rich ore you don't go ask the government to pay you to keep digging in the same direction. You start digging somewhere else. That is what we need string theorists to do.
You lost the thread. We’re not talking about eliminating funding for physics. We’re talking about not funding very specific fields that haven’t produced anything useful for decades.
Someone with an IQ high enough to become a physics professor can't retrain, even within another sub-field of physics?
Moving about a bit is not losing people, and if they're not providing something useful then they've already been lost anyway, and the opportunity cost of what they might be doing usefully elsewhere is also spent. May as well employ them at McDonalds, except they'd probably be rubbish at it.
> Automation offers more of this, and it’s essentially going to go feed the owners of that automation.
An economist visits Communist China and they proudly show him a railway being built. He notices the workers are using shovels to dig the holes, instead of construction machines.
- Why don't you use construction machines?
- We care a lot about the workers here, using construction machines would make some of them unemployed!
- Well, in that case, I suggest you make them use spoons instead.
If there is funding, there is apparently a need for research.
The solution is never for individuals to induce change. This doesn't work for consumers (i.e. you can't say "If everyone stops buying plastic bags, we'd do something for the environment", because, well, nobody cares... You have to make selling plastic bags either illegal or add a significant surcharge so that consumers avoid them for reasons that actually affect them, like "too expensive").
And it doesn't work for anything else either. As long as there is funding, this will go on. The "solution" is to stop funding, but apparently since funding is still flowing, someone still values the research. So what is even the problem?
No progress has been made? Well, if funding is still there, whoever funds this seems to be beyond happy with the progress. This is a non-issue.
> If there is funding, there is apparently a need for research.
This is very close to "There must be a pony in there somewhere". [1] If the question is whether the field is worth continuing to fund, the answer can't be, "Well since people are still funding it, it must be worth funding." By the same logic, we should just keep putting billions into WeWork.
Research is hard to value, and fundamental research especially so. The logic of "if people buy it, it must be good" only patchily applies to normal commerce, where results are relatively easy to measure and feedback loops are short. It's wholly inadequate for feedback loops on the scale of decades.
This makes sense until you realize the people funding the grants are not the people approving the grants. It is really a lot easier to spend other people's money poorly than to spend your own poorly. Besides, if you want to know if a grant is worth funding who are you going to ask? Probably esteemed people in that field or something related, who all have the same poor incentives and institutional inertia to contend with.
And when politicians are funding long term science, they aren't funding the actual long term benefits, they are funding the APPEARANCE of producing long term benefits (and the funneling of money to those who can provide kickbacks). As long as the pretense can be maintained, anything can be funded, regardless of its actual utility.
The logic is not "if people buy it, it must be good", but more "if people buy it, there must not be something else that they can buy that gets them what they want for less".
I think that's within the working definition of "good", but even if it isn't, I think my concerns still apply. It might on average be true in certain narrow circumstances, but there are so many exceptions it conceals at least as much as it reveals.
Or perhaps the issue is that the same people who are in the clique of said field are also the ones evaluating the research proposals, so the funding doesn't run dry?
Not that that's a good or bad thing. You wouldn't want a "non-expert" to evaluate the proposal. And perhaps a tapering of the funding is a better indicator of the interest/progress of the field?
Generally there's an outside source of funding that's not part of the field. Thus top scientists find themselves becoming "rainmakers" rather than doing research. You tell your staff to wear their white coats and glasses, lead a tour around the facility, point out how big the machines are, etc.
Well, there _is_ quite a bit less funding for anything in Physics nowadays, at least compared to the Cold War era. Some professors have indeed been forcibly pushed out of Physics for this reason, but mostly it has an effect of making things more hopeless for younger folk (due to the cronyism problem mentioned above).
In computer science, researchers who had spent decades working on feature engineering for images got completely blown out of the water by deep learning. All those decades of papers would never get another cite. Everyone moved on.
String theory doesn't actually do anything though, so there's no way to say it's better or worse than anything else. It's like someone saying they're working on solving the halting problem or something and eventually they'll get there and they've been working on it for 20 years without any code that actually works. At some point, you just have to give up and do something else. They keep getting grants to work on this stuff though for some reason. Someone needs to interview the people who are giving out string theory grants and ask them why the heck they are still giving these people money.
There might still be some value in it. I think people will use tools like AdS/CFT correspondence for a while. And it is worth remembering that in terms of actual dollars, fundamental physics research is not that big of a spend. Like, part of the reason why these articles are important, is that the pie is so small to start with. So I don't worry too much about the string theory grant money, if anything I would not mind the absolute dollar amount going up if the relative fraction decreased with it.
The one thing that kind of irks me is this Kaku book “The God Equation.” I have only seen one equation that is so universally applicable that it could deserve that title, and even then I would be hesitant about that because it might give people the wrong impression. (It is the transport equation—it keeps appearing and appearing, a bunch of other equations are special cases of it, it is involved in one of the million-dollar Clay Mathematics prizes so there is clearly something hard/intractable about it, and it has a term which refers to creation and destruction. It says, a box flows downstream, the time rate of change of stuff in the box is equal to the flow J of stuff through the walls of the box plus the rate Φ of stuff being created/destroyed in the fluid. Or, ∂ρ/∂t + (v · ∇) ρ = - ∇ · J + Φ.)
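To spell out why so many familiar equations fall out of it, here is a minimal sketch in my own notation (divergence form rather than the parent's material-derivative form; the two agree for incompressible flow), written as LaTeX with comments:

    % Generic transport of a density \rho(\mathbf{x},t) carried by a velocity field \mathbf{v},
    % with an extra flux \mathbf{J} and a creation/destruction term \Phi:
    \partial_t \rho + \nabla \cdot (\rho \mathbf{v}) = -\nabla \cdot \mathbf{J} + \Phi
    % Special case 1: \mathbf{J} = 0 and \Phi = 0 give the continuity equation of fluid dynamics:
    \partial_t \rho + \nabla \cdot (\rho \mathbf{v}) = 0
    % Special case 2: \mathbf{v} = 0 and a Fick's-law flux \mathbf{J} = -D \nabla \rho give the diffusion (heat) equation:
    \partial_t \rho = D \nabla^2 \rho + \Phi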
> for images got completely blown out of the water by deep learning
Deep learning might be a good example. The underlying neural networks had been a “dormant science” for decades until their breakthrough.
I wouldn’t be so sure either that image feature processing won’t get another cite. More likely they will show up in preprocessing again as the boundaries of deep learning get pushed.
The thing we see with image features is that the early layers of a convnet learn almost the same things as the specific engineered features we used to implement, but better - the earlier feature engineering used "clean" human-designed structures, but DL automatically got to the same point and also tweaked the coefficients to be slightly better, and once you can do that, there's no reason to go back to what we manually engineered. For example, why would you explicitly use a Sobel edge detector in preprocessing if you could use (for that same preprocessing) a set of convnet kernels that include (among other things) edge detector kernels that are very similar to Sobel but slightly better?
So no, I would not expect the currently known engineered image features to show up in preprocessing again - they simply do not add any value whatsoever (which can be experimentally verified) over the "learned preprocessing features". Perhaps someone will engineer a new substantially different type of feature that can't be currently learned from data, and that would get used and cited - but it won't bring citations to the old engineered features.
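To make the Sobel comparison concrete, here's a minimal Python sketch (my own illustration, not anyone's production pipeline; the "learned" coefficients below are invented to stand in for what a trained first layer typically converges to):

    import numpy as np
    from scipy.signal import convolve2d

    # Hand-engineered Sobel kernel for horizontal gradients (classic feature engineering).
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

    # Hypothetical "learned" kernel: same shape, coefficients found by training rather
    # than chosen by hand (these values are made up purely for illustration).
    learned_x = np.array([[-0.9,  0.1, 1.1],
                          [-2.2, -0.1, 1.9],
                          [-1.0,  0.0, 1.2]])

    image = np.random.rand(64, 64)  # stand-in for a grayscale image

    edges_engineered = convolve2d(image, sobel_x, mode="same", boundary="symm")
    edges_learned = convolve2d(image, learned_x, mode="same", boundary="symm")

    # Both calls are the same 3x3 convolution; the only difference is where the
    # coefficients come from -- a textbook, or gradient descent.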
So if you were to write an OCR software today, you would just feed a neural network with a whole raw phone image shot of a floppy paper? Or would you try to flatten the image first, then extract words and letters and then synthesize that back to a bigger model (also using NN) that finally produces your text?
This seems like quite an orthogonal issue to what I was talking about -- your comment seems to be about the separation of the pipeline into subtasks, while I was talking about the methods used for particular subtasks; I would not consider "try to flatten the image first" as part of feature engineering, I would consider it a preprocessing task that might be done as an engineered feature, but might also be done as a machine-learned subtask, or skipped entirely, or integrated into a more general transformation that's believed to inherently correct for non-flat images.
I don't work on OCR much (and when I did, it was for book digitization, which doesn't have these specific challenges), so I don't have an opinion on what the state of the art is for a task like you describe (I'm imagining analysis of receipts or something like that); however, across many domains of ML (including but not limited to computer vision) we are seeing the advantages of end-to-end optimization.
So, for example, the image preprocessing we would like to allow for input of raw phone camera images includes correction for lighting, angle, and crumpledness of paper. Obviously, I agree that these things should be done, but I do not necessarily agree that they must be done as separate engineered features.
I don't have an opinion on what's the best option "today in production" for OCR - it's plausible that the engineered-feature way is still the best at the moment, but if we're looking at where the field is moving, then I'd argue that there is a strong tendency towards (a) using numerically optimizable methods for these corrections as opposed to hand-selected heuristics; (b) optimizing these corrections for the final OCR result as opposed to treating them as a separate task with separate metrics to optimize; and eventually (c) integrating them into an end-to-end system instead of a clearly separable stage of "correcting for X". I'm not certain where the state of the art for noisy OCR is today on this, but it's a one-way direction; the key point of my comment above is that once (if!) we get these things to work, I would not expect to go backwards to specific handcrafted features ever. It's plausible that some tasks are better treated as separate and can't be integrated well (perhaps the selection of text segments in your OCR example is like that), but for the features that we have already managed to successfully integrate (which was the topic discussed in the grandparent comment), IMHO there is no reason to ever go back.
Perhaps a relevant example of a field that has undergone this transformation is machine translation - where just a few years ago production pipelines included literally hundreds of separate subsystems for niche subtasks conceptually similar to those you mention regarding OCR, the shift to neural ML was accompanied by making these subsystems redundant as doing the same thing in an integrated end-to-end optimizable way gets better results and is simpler to implement, maintain and debug due to a more unified architecture.
Similar trends have also happened for face recognition and for speech recognition - I would presume that OCR is structurally similar enough to see the same fate if it hasn't already.
I'm not so sure; I think a _lot_ of people would be interested in advances in non-ML image analysis tech. While ML has been effective for a recent period in industry, it has a number of issues such as intensive training costs, extreme difficulty in fully understanding the behavior of a model (since we can only do experimental verification, and only on behaviors we already know to be interesting), and ethical issues such as unintended gender/racial bias. Just off the top of my head.
I think what you are pointing at is really the most toxic part of tech: marketers and investors have found that tech is a good way to aggregate money, and so they have thrown a lot of funding at tech that can aggregate it for them. However, we haven't actually proven that that tech is the best solution or a sustainable solution. We don't understand most of what we do with computers very well; we just approximate until it works well enough (for the marketers and investors, of course).
ML and deep learning are very valuable, of course, but their recent market dominance doesn’t indicate that they are the final or most correct solution to the problems they are being used for. It indicates that people want to spend money on it right now.
The halting problem is a poor example as there is already a simple proof showing that such a solution does not exist.
A much better example is P=NP. This is a very rich area of research with many closely related problems that has been worked on for decades and will continue to be worked on for the foreseeable future.
A better analogy might be two kids standing at the bottom of a tree arguing about what the fruit tastes like, but they’re too small to jump or climb to get the fruit and find out.
It’s not wrong to want to find out what the fruit tastes like. Both kids have good ideas about what it might taste like, and there are many such plausible ideas. But the tree is tall, and remains out of their reach, and all they can do is speculate. Meanwhile, there is no shortage of other interesting things they could be doing.
That String Theory continues to adhere to known results keeps it in the candidate pool. If it can make the same predictions as the current framework, then it makes sense to keep working on both because neither has a known advantage over the other in predicting unknown phenomena. The only way to find out is to continue exploring all such theories. Since human talent and research is limited, it becomes an economic problem. Is it worth having half of the brain power working on two equally plausible theories? Who knows... but what I do know is that diversity in this field is creating new ideas and new mathematical tools and that has to count for something.
I am just finishing up my MS in Statistics, and at my university they don't have any course (yet) that goes into deep learning. The only time a course briefly talks about neural nets is in the context of logit regression. And I have always wondered if the time spent proving the Gauss-Markov theorem would be better spent somewhere else.
Do you know any source (book or otherwise) that continues to build on top of the topics taught in regression and linear algebra courses (such as multiple linear regression, weighted least squares, logistic regression, generalized least squares, principal component regression, singular value decomposition) and slowly moves towards the current state of deep learning?
>And I have always wondered if the time spent proving the Gauss-Markov theorem would be better spent somewhere else.
No... learning how to prove it will serve you well in the future. All deep learning is, is figuring out different ways to combine all of this prior art inside a computation graph.
Step N is linear regression, SVD etc... step N+1 is deep learning.
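As a toy illustration of that step N / step N+1 point (my own sketch, assuming NumPy; nothing here is from the thread): ordinary least squares is the closed-form "step N", and the same model refit by gradient descent is the optimization loop that deep learning reuses, just with more layers stacked on top.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                        # features
    y = X @ np.array([1.5, -2.0, 0.5]) + 3.0 + 0.1 * rng.normal(size=200)

    # "Step N": closed-form ordinary least squares (classical statistics).
    Xb = np.hstack([X, np.ones((200, 1))])               # add an intercept column
    w_ols, *_ = np.linalg.lstsq(Xb, y, rcond=None)

    # "Step N+1": the same model viewed as a one-layer network, fit by gradient
    # descent -- the loop deep learning reuses with more layers and nonlinearities.
    w = np.zeros(4)
    for _ in range(2000):
        grad = 2 * Xb.T @ (Xb @ w - y) / len(y)          # gradient of mean squared error
        w -= 0.1 * grad                                   # one gradient step

    print(w_ols)  # closed-form estimate
    print(w)      # gradient-descent estimate; the two should nearly coincide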
I wish I knew what it was like to have the math background, and don't at all mean to suggest one should not bother, but as a person on a team of people who have made a few production NN models over the past year: it is completely possible to conceptually and practically grasp iterated reverse differentiation with convolutions well enough to use deep learning to do novel work, without having the deep background math knowledge. For example, I barely know what a Support Vector Machine is, nor could I do a linear regression without a lot of hand holding. But I can design a passable tensorflow model and improve it.
It would definitely be very hard to do any meaningful research from this position, but I know enough to be useful (er, or dangerous) and can read papers and code to keep up with recent advancements, and try things out like different convolution designs, layer designs, gates, functions, feedback, etc.
(On the topic of reading: Karpathy[1] and Colah[2]'s posts have a wealth of introductory conceptual information and images in them, and helped our team discussions a lot while we've been learning.)
I think it will vary from person to person -- but when I am looking at a new idea (e.g. principal component analysis) that is built on top of old ideas (PCA is based on the singular value decomposition), familiarity with the old ideas makes me more comfortable/confident that I'll be able to understand the new idea thoroughly, and hopefully I'll also see the incoming pitfalls of the new idea. And I really enjoy the process of adding a new concept/idea to an existing larger picture.
Sure makes sense. There are multiple layers to deep learning:
The math and proofs.
The algorithms.
The software tools/frameworks
Building NNs with those tools.
All of those have different levels of conceptualization and utility.
As some extreme examples: anyone can build an iPhone image classifier with drag and drop today. That's one level. Developing an alternative to backprop is another level.
I liked "Hands on Machine Learning" by A. Geron. It's quite practical - you won't find careful progressive proofs, but it is very good at introducing the methods of ML and relating them to real problems. It provides you with a way to get your hands round the techniques - with an MSc in Stats you should then be well placed to understand where the gaps and problems are and to use things wisely.
It isn't quite state of the art, and it is very light in terms of how to evaluate and deliver the solutions. But I thought it was good. My team at work has gone through it selecting one chapter at a time this year in our book club - it's provided a good catch up and share forum for us. Some of the folks in the team are Engineers and so haven't a strong ML background, a couple are Data Scientists and therefore were in the same place as you (+/- some use of deep learning in a few engagements) and a couple of the folks are diehard ML researchers, but focused on different things (time series, NLP, evaluation) so there was a lot of crossover for everyone.
Last time I took a uni course, on the side of work, I enrolled in a brand new Deep Learning course. We used Ian Goodfellow's book (https://www.deeplearningbook.org/), as well as supplementary research papers.
That's kind of true, but also stuff like Hessian noise filters are still useful, and they existed and were written about prior to the application of neural networks to images.
1. Don't try too hard to make science funding more efficient / market-led or whatever -- just fund more. Double the science budget tomorrow, and again every five years for a generation. We are about to go through a societal refactoring with solar / electric / digital / climate change hitting at once. Spending a few quid to help out the fundamentals won't hurt anyone.
2. The above might upset people, but string theory is a good example -- partly, yes, people are held hostage by funding (much like it's hard to avoid being typecast as an actor). But the solution to that is to fund lots more (experimental) movies, not to hope studio heads can pick better winners.
I am perfectly happy for a slice of my tax dollar to go on someone thinking about Pram theory (#) for a decade simply because we have no idea whatsoever how to tackle the big ones in physics, so spray and pray is going to be a lot better than most other options.
(#) Red Dwarf episode; worth watching just for this joke.
I would agree with funding more science, but you'll need some hoops to jump through or anyone can just come in and grab free money to "do research" with. I think you will necessarily end up not too far from where we are now because of that.
Why? I find this attitude curious -- the thought that all the academics would suddenly just start taking the money and twiddling their thumbs. So because of this we set up a huge administrative body around it and keep the academics from actually doing the work.
I can tell you that the vast majority of academics are not motivated by money. If that was the case they would move into industry where they could earn significantly more.
> Why? I find this attitude curious -- the thought that all the academics would suddenly just start taking the money and twiddling their thumbs.
Depends on how you define “academic”. There are suddenly going to be a lot of “academics” showing up for their free handouts once there is no bar.
>I can tell you that the vast majority of academics are not motivated by money. If that was the case they would move into industry where they could earn significantly more.
Selection bias. The academics you are thinking of are the ones willing to put all of the work into getting grants, teaching, etc. There are an order of magnitude more PhD holders that nope out of academia after graduation because of the grim prospects.
> I can tell you that the vast majority of academics are not motivated by money. If that was the case they would move into industry where they could earn significantly more.
No, they're motivated by safety. And there's nothing safer than the ivory tower. Academics would work on self-referential research until the end of time if they could get funding for it.
They create academic children in a perpetual loop, trying their very hardest to avoid life and all its complexities (especially those of people, to which they are ludicrously blind) by living in their theoretical world of nothingness.
I suspect that in many fields, funding acts as a fertility drug for researchers. The opportunities to fund research are, accordingly, pretty much always going to outstrip any amount of funding that could be provided.
If that's the case, you're inevitably going to need some way to decide which of the many worthy opportunities merit sufficient investment and which ones do not.
> the vast majority of academics are not motivated by money. If that was the case they would move into industry where they could earn significantly more.
Good point, becoming an academic may already represent enough of a hoop, though that’s just the CS perspective. In other fields there may not be much of a lucrative industry, so I’m not sure whether this will hold generally.
Could not agree more. If you look into most big discoveries, the fundamentals were laid 50-70 years earlier. For example, the math behind string theory (twistor space) was developed in the 1960s, while the theory itself gained traction in the 1990s. Today the new math developed under the premise of string theory might be fundamental for future theories coming 50 years later, even if string theory is proven wrong.
In terms of dollars that's fine. But on a societal level I think you have to think about the opportunity cost: the problem is not the money wasted on string theory. It's all the other research that those researchers could have done.
That ultimately is their choice - using money as a forcing function to get the research we want is a poor approach to picking fruitful areas of research (although not uncommon)
Sure, but the point of government funding is to encourage things we want for society. If some researcher wants to pour another 20 years of salary into more string theory, maybe we shouldn’t pay for that.
Perhaps rather than prescribing areas of research, we should just ban or severely limit areas that haven’t shown any promise.
Then, as now, there was a large body of entrenched individuals who would not budge until some ingenious experiments by the British and French (et al.) entirely destroyed the theory.
Except that string theory isn't exactly mainstream - there's a small group of people who do string theory, and a large number of physicists whose work has been completely unaffected by string theory.
That's what makes this so strange. Everyone outside string theory seems to think string theory is pseudoscience. Why is it still being funded?
I don't begrudge them the first twenty years. But for the last twenty years they don't seem worried about the lack of meaningful progress, because their maths are so pretty. I sort of get that, but it's alarming that Physics Today still hasn't printed any of this really basic criticism. This makes me suspicious of other things, including high-energy physics.
I think of the closing of an essay by Martin Gardner on once-mainstream scientific theories -- eugenics, Freudianism, etc. -- which is that when evidence is weak, scientific theory tends to exactly match scientists' cultural biases.
With string theory there is no evidence to work off. To me it feels like, with nothing better to do, everyone in the field is just trying to out-math everyone else.
Using "math" as an abbreviation for the noun mathematics always grates a little for us Brits (not that we'd say anything), but as a verb it totally works, I math, you math, he/she maths ...
And yet, despite all that annoyance for the ways Americans use the language, a significant number of French derived words are butchered on that side of the pond (e.g. lieutenant).
I find that in big organizations, this is a very frequent pattern. There is a commonly accepted approach that is obviously wrong to a subset of people. However, the people who are able to see the "wrongness" either don't have the political skills to accelerate a mental shift for the entire organization, or they choose to apply their limited "political leverage" to a different (often smaller and more tractable) problem instead.
Yes, it's true that string theory hasn't panned out in the same way as general relativity or the standard model, but it really bothers me when people criticize it without even tacit acknowledgement of the fruitful discoveries that came out of the research that has been applied to other fields. The AdS/CFT correspondence alone is worth the effort that was put in, not to mention mirror symmetry in algebraic geometry and everything that was discovered about Calabi-Yau manifolds.
String theory research isn't directly useful, but other fields of physics and mathematics reap the benefits.
Public grant making is a very political process. People imagine there is an apolitical body that makes grants to physicists based on the merits of their work ... but that's not the case.
In any resource allocation scheme where the grantors have no skin in the game, politics and cronyism will dominate in the long term.
That's probably what's happened here. String theorists get grants not because string theory produces practical results, but because there is now a network of string theorists and sympathizers in charge of the institutions that control professorships and grant making.
That's why resource allocation should be left to those with skin in the game. A market is one mechanism for forcing allocators to expose themselves to the downsides of bad investments ... but there are other mechanisms if people have an allergy to markets.
What is there around to replace string theory with? I predict that if there ever is a viable alternative available with even 0.001% more evidence supporting it, strings will vanish in a day. Presently all other approaches to quantum gravity are in a less developed state and make even fewer predictions.
We have several important contenders, but it is kind of funny that if you rank all contenders by relevance to empiricism, as is rightly demanded by Woit and Hossenfelder, string theory comes out in the lead.
One could argue that this is somewhat due to selection bias, as string theory has been given the most research time/funding historically, thus it's had the longest time to mature. If the other contenders had the same research effort it's very possible that at least one might be found to actually be more relevant to empiricism.
String theory exhibited quantum mechanics plus gravity since the day it was invented, it's not like LQG where setting up three macroscopic dimensions is a nearly insurmountably difficult problem. People think that the problem can be solved, otherwise nobody would bother with it, but string theory is obviously the first and best thing to try.
String theory at least predicts gravity in three dimensions. Other approaches to quantum gravity are not even far enough along to do that - it was an immense and relatively recent accomplishment to have LQG on a circle.
String theory has not made predictions that are in any sense meaningful to science.
“Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory--an event which would have refuted the theory.”
>String theory has not made predictions that are in any sense meaningful to science.
You're welcome to set the bar of meaningful at any level you like; such a statement cannot in principle be disagreed with. However, the point stands that string theory has made more accurate empirical predictions than any other theory of quantum gravity. That doesn't mean no other theory of quantum gravity will ever beat it, just that the focus on string theory among people and funders who are precommitted to studying quantum gravity is rational.
I think the above is using "meaningful" to refer to "falsifiable". The fact that string theory is so much more advanced theoretically and still doesn't make any falsifiable predictions makes me less optimistic about it than less mature theories like LQG, which (according to some experts) already seems closer to making falsifiable claims.
String theory leads to the falsifiable claims of gravity and quantum mechanics. I don't know why this non-falsifiability argument keeps getting repeated! It would be nice for string theory to make more claims than just that, but they're still working on it. LQG is an example of a framework that truly makes no claims, because there isn't even a LQG setup for three dimensions of space. To my knowledge it hasn't even been proven that there is an LQG setup for three dimensions of space.
Yes. It has "falsifiable claims" in the sense that, if existing theories of e.g. general relativity and quantum mechanics were proven false, it would also be proven false. Generally when people refer to "falsifiable claims" they're referring to falsifiable claims that are not already made by a more parsimonious theory. As far as I understand the field, string theory has made no such claims.
And yes, that's what I meant by LQG being a less mature theoretical space. Either LQG will fail to find a solution for 3D space (so sad), or it will and its solution will make falsifiable predictions that are not part of existing theory (yay!), or it will propose a set of solutions that make no new falsifiable predictions (in which case it enters the same state as string theory). Both of those first two possibilities would be forward progress.
Look around you and note that there appear to be about three dimensions, and furthermore there's gravity. Also, quantum mechanics is true. String theory is the only theory that satisfies both of those conditions at once. Other approaches to quantum gravity aren't that far yet, but could conceivably yield those predictions at some point in the future. Today, though, there's only string theory.
You are still using “prediction” in a sense that I would say is not at all “risky”.
Put another way, so far string theory is an elaborate kind of curve-fitting — a mathematical construction that matched the empirical data available at its inception, but one that to this day has never predicted a novel measurement. Unencumbered by the theory, we would not have expected any recent experiment to produce different results.
Maps are literally curves fit to surveyed datapoints, and nobody questions their value. So what if string theory is nothing more than a map of physical laws? GR can't predict quantum behavior, the standard model doesn't have gravity, string theory does both and so is presently the only "theory of physics" in existence.
I thought I remember string theory positing more than three dimensions. Does string theory explain our perception of three at human length and time scales?
Yes, in string theory the extra dimensions are very small, leading to three macroscopic dimensions the same way the thinness of a piece of paper leads to two macroscopic dimensions.
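For anyone curious how "very small" extra dimensions drop out of the low-energy picture, the standard Kaluza-Klein argument is essentially a Fourier expansion (a sketch of the textbook mechanism, not anything specific to the parent's claim):

    % A field on 4D spacetime times a circle of radius R can be Fourier-expanded
    % in the compact coordinate y:
    \phi(x, y) = \sum_{n=-\infty}^{\infty} \phi_n(x)\, e^{i n y / R}
    % Each mode \phi_n behaves as a 4D field with mass m_n \sim |n| / R, so if R is
    % far below experimental resolution, only the n = 0 mode is visible and physics
    % looks lower-dimensional at accessible energies.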
Considering that it started with 11, or was it 26 dimensions, and had to go through some hoops to get it down to three, that statement is almost absurdly comical.
In book "Everything is Now" by Bill Spence writes that research in string theory led to developing new mathematics which led to helping to manage data in hadron collider which led to detecting Higgs boson. It is not direct cause and effect correlation but it helped with due process.
You can’t base the correctness of a theory on what the techniques it developed lead to.
For example, astrology probably developed techniques of recording astronomical events and likely some mathematical techniques, but that does not mean that astrology is a correct theory.
A long time ago in a galaxy far away I entered U.C. Berkeley as a physics major. Then I dropped acid and became very interested in consciousness, but didn't see that being taught in the physics track, so I dropped it. Now I see that physics is coming around to talking about consciousness. I don't know why they stopped, because I learned later that early 20th century physicists did talk about it, Max Planck among others. Materialism reigns supreme and I don't know why. No one has to start being a theist when they stop being a materialist.
I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness
― Max Planck
As I said, physicists thought about it 100 years ago and then the atomic age came along and physicists didn't pursue consciousness as an area of interest. That's been changing recently. Some catching up must be done. As you point out, tests do not exist yet. It doesn't matter if you give up on it. It's being pursued. I think it's an interesting consideration.
"The tribalistic sociology that has led to a large group of people calling themselves “string theorists” when what they do has nothing to do with string theory is also something I would have thought impossible."
It's my personal bias but I see every group this way.
As someone intrigued by 'cults' early in life, I've come to see that all groups exhibit quite a lot of in/out-group behaviour. The weird thing is when it happens among the most socially unaware people (often in science), combined with a highly objective discipline (again, science)... people are just unwilling to accept that social artifacts can dominate their sphere.
Now add the existential issue that their 'entire worldview' and 'life-long work' are tied up in something, and delusions can become very strongly internalized.
String theory is at least as interesting for the 'sociology' part as it is the 'string' part.
String theory, and the 'reproducibility problem', have kind of shaken my faith in science. I grew up really 'believing' in Science; I view it much differently these days -- including seeing those who have an unassailable faith in it as likely being a little deluded themselves.
> It's my personal bias but I see every group this way.
I have the same "bias".
What people seem to forget is that the scientific method was supposed to be a way to develop theories that made good predictions in spite of the fact that humans, both individually and in groups, are unreliable and do all sorts of crazy things for all sorts of crazy reasons. Calling oneself a "scientist" or having an advanced degree does not make one less susceptible to that. One has to set up institutions that will harness the craziness of humans and human groups in a direction that, on average and over the long term, is productive.
But what most people call "science" today is not such an institution. For example, the real message of the "replication crisis" is that most of what the average person is told is "science" is actually nothing of the sort. A sane institution of science would never have labelled that stuff as "science" in the first place; it would have said "sorry, we're still working on this, no results yet". And a lot of times it would say that and there never would be any results, and science would say "no results from this line of research; that's to be expected since many lines of research that initially look promising don't pan out".
There are several things to say about the replication crisis. First, it is important to remember that science always had unethical behaviour, intrigue and politics. In the end the best science always won, but it might have taken significant time. One could also say that by the very nature of us speaking about it, it can't be an existential crisis.
Finally, I would argue that the larger problem stems not from within science, but from how "science administration/funding" has changed. Essentially, the idea of making science more accountable/measurable has led to a state where scientists spend large fractions (>30%) of their time applying for funding. Also, the funding is very much tied to specific measurable outcomes, so there is just no time left to dedicate to reproducing results, because you can't get funding for it.
I believe we as society need to come back to a view of seeing science as a general good, without the need to produce some immediate measurable outcomes (in fact this is true for many aspects of society). Unfortunately the MBAs are running the show almost everywhere, so I don't see this happening anytime soon.
> science always had unethical behaviour, intrigue and politics
Yes, this is part of what I was referring to when I said that humans do all kinds of crazy things for all kinds of crazy reasons.
> In the end the best science always won
I'm not sure I agree. I would agree if you limited your claim to hard science--areas where we can nail down the best theory by lots of data and controlled experiments--but many areas that are called "science" are not amenable to that.
> by the very nature of us speaking about it, it can't be an existential crisis
I'm not sure I agree with that either.
> I believe we as society need to come back to a view of seeing science as a general good, without the need to produce some immediate measurable outcomes
I agree with this, but I would also add that science should be viewed as a tool for finding good predictive theories, no matter what those theories end up predicting. Unfortunately much funding of science now is given with the expectation that a particular kind of result will be what "science" predicts.
I've been thinking lately about how this may be a linguistic game. Science is just a label and when anything turns out to be wrong, we just say, "well that's part of the scientific process". In this way science can never be really wrong, because it is always open to being proved wrong and if so, it just incorporates that new fix/criticism as a new part of science. (Similarly, philosophy seems to be impossible to criticize well without the criticism becoming labeled as philosophy itself.)
I think it's rather that we have had a bunch of great curious people who worked on interesting stuff and found out a lot about nature. But we can't equate that with institutional credentialed authoritative "Science".
I guess there is some underlying scientific ethos, that can be learned between the lines, but I don't think it's a clear method (as in the intro textbook definition).
> Science is just a label and when anything turns out to be wrong, we just say, "well that's part of the scientific process".
No, we don't just say that. Recognizing that “knowledge” about the universe is always tentative and contingent, and that an understanding of the universe is to be replaced when one that is strictly better at predicting observations is identified, is the scientific process. And it's a pretty big change from the approaches taken prior to its adoption.
> In this way science can never be really wrong, because it is always open to being proved wrong
“Science” is almost entirely a combination of openness to being proven wrong and a particular understanding of what it means for something to be proven wrong. Since it is an epistemology, it cannot be “really wrong” if it is consistent, except in terms of a different epistemology.
> I think it's rather that we have had a bunch of great curious people who worked on interesting stuff and found out a lot about nature.
We had a lot of that without the scientific method, too. What the scientific method did is provide a framework in which the work of great (and not so great) curious people could be effectively aggregated so that they built on each other, whereas pre-scientific approaches did that notably less well in terms of building an ever-better understanding of the physical universe, because work was judged by standards other than predictive power.
> But we can't equate that with institutional credentialed authoritative "Science".
Science is exactly about authority having little to do with institutional credentials.
It's an equivocation between different senses: first, science as the brave "belief in the ignorance of experts", "you yourself are the easiest to fool" idea; second, what we should rather call academia, or the academic science community, with its associated rituals, connections, rules, and prestige; and third, the consensus body of knowledge in textbooks and schools.
Much of 3 was created using roughly 1, before 2 really existed. 3 is the original source of respect in the eyes of the masses: the landmark results, the household names, Newton, Galileo, Copernicus, Darwin, Einstein etc. But then along comes 2 with its little games and citations and positions and visa requirements and recommendations, CVs, conferences, journals, publish or perish, the push for sexy results for PR announcements and the next grant, p-hacking, tyrant supervisors who only care about having a paper at any cost, massaging the data, not speaking up because if funding dries up for the institute the newly birthed PhD student may lose her job and be deported, the big-name prof who must be worshipped, etc. They co-opt the reputation accrued by the giants, and then if anyone has any criticism, they are scolded for being against the ideal 1, the noble endeavor and ethos and upright honest methodology. In many cases people good at playing game 2 need quite different skillsets than people who'd be good at 1.
But who will produce the next big item for category 3?
Is 2 a good mechanism to encourage practicing 1 to get to 3? I don't think so.
> I guess there is some underlying scientific ethos
The ethos is simple: all scientific models are judged by their predictive power, and nothing else. If your model does not make accurate predictions, it is wrong, period. Nothing else matters.
I agree that there is no single "method" that guarantees that models will satisfy the above criterion.
"The ethos is simple: all scientific models are judged by their predictive power"
Then String Theory is rubbish because it predicts nothing?
Every single theory in science started out with some 'predicate knowledge', and based on 'intuition', scientists reached out a few leaps beyond it with some ideas.
Some proved right, many proved wrong.
Some took a while to come together.
"String Theory" is different - it was not based on 'intuition' or any predicate foundation.
'Strings' are a fantasy - just completely made up.
Some people took a fantasy 'new origin' and put tons of math around it.
Since then, they've never been able to map it to reality, or even conceive of an experiment let alone do one.
What it looks like is religion - not only in its foundation, but in the behaviour of its acolytes.
How does a Priest react if you tell him 'God does not exist'?
How does a String Theorist react if you tell him 'There is no String Theory, it's just your imagination'?
Perhaps in the greatest of ironies, I actually believe God is more likely to exist than String Theory (!) or at least, there might be some kind of actual spiritual reality to our existence.
Strings? It's just an idea that snowballed into something.
It tells us how to judge it, but not how to make it.
How you come up with ideas and directions is a mystery. It's luck but not just. Kinda like investing.
Feynman also just described that step as "Guess it".
It's not clear at all that our institutional organized science process (grants, papers, academic positions and evaluations) correlates with better success at finding out stuff.
I was assuming hard science here (considering that's what the article was about). I agree with you that in the broader sciences this is a different debate. That said, I do not share the disdain for the "softer" sciences often found in the hard sciences. As a bit of a side note, I find the disregard for softer sciences much stronger among engineers than scientists. It seems to me that engineers are often much more outcome-focused in their work, while e.g. physicists are much more tolerant of research aimed at expanding knowledge, a tolerance they also extend to other fields.
> I do not share the disdain for the "softer" sciences often found in the hard sciences
It depends on what the disdain is based on. If the disdain is just based on "your field isn't physics (or chemistry, etc.)", I agree that's not justified. But if the disdain is based on "your models don't make good predictions", I think it's justified. Scientific models are supposed to be judged based on whether or not they make good predictions; if they don't, they're wrong. Models that make good predictions are much, much rarer in the "soft" sciences--many such fields basically have no models with any predictive power at all, yet their proponents claim they are "sciences" and want their models to be used to, for example, make public policy.
I think part of the quantitative push comes from a shift in society.
Nowadays we have meritocratic expectations and want to give equal opportunity to all, our cultural ideal of governance is democracy, etc. In this framework, a big thing to avoid, and a big source of angst, is that people will take these tax-paid jobs and do jack shit and just sit around having fun. We want to avoid that, and replace any lazy non-working people with people who will do it better.
For this we need some accountability. To see that papers get written and not only that, but they are also having an impact on other research (citations). What else can you do? You could have timesheets of working hours, or computer software that tracks usage to see if someone just watches YouTube or reads the news at work... I mean, sure, you could base it on common sense and talking with the researchers, but that's almost like asking them to write a report, and then why not publish the underlying work as well if it was good work?
Earlier, however, science was more like a hobby and the background system was various forms of feudalism. Many scientists were rich noblemen who had the wealth to pursue their interests. If nothing came of it, they were still well fed and had nobody to report to. Or it was enough to have a patron who liked the work enough to feed the scientist. Or it happened within the church with some theological justification/disguise, but that wasn't democratic either.
Since we now conceptualize power bottom up, from the masses to the intellectuals and not top down from the monarch/God to noblemen/intellectuals to the masses, we need a system to justify funding decisions to the people. And the easiest way is to point to papers and awards and experiments and machines etc.
In a way the old aristocratic system perhaps gave more freedom to a small number of scientists, today we have tons of scientists who all need to follow quite a rigid path.
I guess there was a time in-between, till the seventies or eighties maybe, when things were pseudo aristocratic with relatively few scientists with lots of freedom to do whatever the hell they please, including just reading newspapers or books all day long if they so decided.
I think our collective conscience is catching up and demands objective accountability, so that things work in a just manner.
> For this we need some accountability. To see that papers get written and not only that, but they are also having an impact on other research (citations).
But papers and citations are the wrong metric for accountability. What we want from government funded science is models that make good predictions. That's what we need if the knowledge gained from government funded science is going to be useful to the government and the taxpayers. So that's the metric we should use to judge it. Number of papers written and number of citations are, at best, irrelevant to that metric, and at worst, actively opposed to it; it's much easier to generate papers and citations if they don't have to be based on models that actually make good predictions.
This is very frequently repeated, but I have never encountered any reason to believe it is true. Certainly, better science has been seen to win, in cases, but the amount of shit hurled first has always been enough to drive any sensible person to do something else less likely to attract hurled shit.
We can get a sense of how often correct theories are suppressed by looking at cases where a now-accepted theory was anticipated decades or even centuries earlier, but failed to gain traction at the time, not for lack of evidence, but simply because the evidence was unwelcome.
Theory of causation was understood a full century ago, but suppressed by the statistics mandarinate, and is only in the past 20 years starting to resurface. Evidence conclusively demonstrating human presence in the Americas before 15kya is still routinely dismissed. Requirement for surgeons to wash their hands took decades to catch on; to this day, more hospital ICUs have the traditional 4% incidence of fatal infection than the very gradually spreading 0%. The entire lack of any negative health impact from eating saturated fat was well demonstrated decades ago, but has only been accepted in this decade. These are just cases I know about; most have been better interred.
I am forced to conclude that there remain many more successfully-suppressed correct theories than accepted correct theories, and that things are getting worse, not better. I.e., correct theories still win sometimes, but the fraction of correct theories that win is decreasing in the face of tenured grant committees.
Spending the past 50 years not supporting any research to discover why meat consumption causes heart disease, via enforcing a saturated fat "consensus", is a failure of science.
Similarly in Alzheimer's research, still enforcing the failed amyloid "consensus".
"MBAs are running the show" is such a cliche. I see that more as simply a reality which is always constrained by resources.
Humanity cannot afford to spend all its energy and resources on scientific research. We have many things we need for survival, and some for entertainment, etc. Obviously we do want to invest some amount of resources and human energy into research - but the question is how much is enough, and which fields should we invest more or less in?
I claim that these questions are fundamentally unknowable / unsolvable. So we guess blindly (and very occasionally with a tiny bit of very myopic foresight), and mostly end up unwittingly relying on various historical accidents.
This is a very, very difficult question for humanity overall, and we're never going to get "right" amount of funding or "right" structure as there's no right answer. "MBAs" do and can hinder progress unnecessarily, but not always - they do exist for a reason and they represent society's collective attempt to avoid waste or runaway spending.
It would be better to entirely leave the term 'MBA' out of it, because it has nothing to do with the discussion.
Governments spending bazillions just want some measure of accountability and that's it.
Aside from the more existential issues you've noted ... there is probably a host of things that could be improved.
One thing may be the recognition that a lot of science just isn't worth repeating at all. To simply 'dial back the faith' we have in any single experiment, and assume that every first pass is fuzzy.
I don't think anyone is accusing anyone of fraud here, this is not a problem of 'a few bad apples' - it's more of a broad systematic issue of 'looking for positive results'.
Perhaps some opportunities lie in getting more behind null hypothesis stuff as being 'respectable'.
> "MBAs are running the show" is such a cliche. I see that more as simply a reality which is always constrained by resources.
I agree it is a cliche, but it is a very observable one over the last decades.
Regarding resource constraints, I argue it's just the other way around. If we were resource constrained we would not build huge administrative bodies to check that the scientists are actually doing what they are supposed to.
Take an example people here are likely more familiar with: in startups there are generally far fewer administrative processes; those come in only once companies reach a certain scale. Startups simply cannot afford to pay lots of people to administer their "progress".
> Humanity cannot afford to spend all its energy and resources on scientific research. We have many things we need for survival, and some for entertainment, etc. Obviously we do want to invest some amount of resources and human energy into research - but the question is how much is enough, and which fields should we invest more or less in?
Considering the amount of energy we spend on things that are not needed and are actively hurting the planet and our survival, I would say we can't afford not to spend more on science.
> I claim that these questions are fundamentally unknowable / unsolvable. So we guess blindly (and very occasionally with a tiny bit of very myopic foresight), and mostly end up unwittingly relying on various historical accidents.
> This is a very, very difficult question for humanity overall, and we're never going to get "right" amount of funding or "right" structure as there's no right answer. "MBAs" do and can hinder progress unnecessarily, but not always - they do exist for a reason and they represent society's collective attempt to avoid waste or runaway spending.
MBAs are quite ill-equipped for this task though. That's my main gripe: they take a process that has problems even at the scale of a business (the focus on short-term gains instead of long-term sustainability) and try to apply it to society at large, even though there is no evidence that it would work.
I do not know what your experience with science is, but that is not how it works. In most cases research leads to unexpected results on a grand scale. For example, research in the string theory field led to the development of new mathematics, which eventually helped to calculate particle paths in the LHC, which helped make gathering data manageable and which led to the discovery of the Higgs boson.
>most socially unaware people (often science)
99% of successful scientists are very socially aware. Usually socially unaware people do not become successful even in science.
>social artifacts can dominate their sphere
Because that's how the world works. For example, a PhD student does his own research, then he consults with his professor, who suggests how to add a marketable layer to his research, and with that layer his research becomes one for string theorists, though that was not his initial goal. This marketable layer helps him get the grant.
>Now, the existential issue of the fact that their 'entire worldview' and 'life-long work' is tied up in something, delusions can become very strongly internalized.
This is made by popular media, movies. Most scientists live normal lives, start families, and try to separate work from life, so their research does not come from a worldview.
"Research in string theory field led to developing new mathematics which eventually helped to calculate particle paths in LHS,"
I think this is koolaid.
String Theory is Probably Wrong.
Thousands of PhDs have been minted, a zillion talks and publications, conferences, think tanks, books, 'Science PR' - all for wish-wash.
That 'String Theory helped some thing over in the corner' does not validate all the investment.
Had the clearly lacking 'Self Awareness' been present, Science could have gone in a variety of directions, producing or necessitating all sorts of other, likely more useful things, out of which some 'useful math' may also have been developed.
But instead we put a lot of effort into vaporware.
Why? Because it's human nature - people don't want to accept they are riding a meme, and lack objectivity.
...
"This is made by popular media, movies. Most scientists live normal life, create families, try separate work from life, therefore research does not come from worldview. "
Again, I don't think you're maybe grasping what I am saying.
It's normative for people, especially serious people, to be wrapped up in a 'world view'.
Businessmen talk about 'free markets' all day long, often completely ignoring real externalities.
Socialist politicians, clouded by their goodwill, are often dangerously unaware of basic economics, the realities of Supply and Demand.
Military Generals may sometimes see a possible 'military solution to every geopolitical problem'.
It's even 'more normal' for someone working on something arcane and abstract to be tied up in their passion and ambition.
Case in point: String Theory.
It seems as though String Theory is probably mostly hogwash.
How many String Theorists would admit that? Or even the very real possibility of it?
And how many will take their 'belief' to the grave?
It's obviously not 'entirely a waste' - but if Science wants to really maintain credibility, it needs a giant dose of self awareness there - and also with the 'reproducibility' problem.
Edit: for some nice examples of cultish, self re-affirming behaviour [1].
I would love to see a series of "20 years later" posts on a bunch of different fields. Where was the field at in 2000, where did they think they were going, where are they now? Related: what are the fields that did fade or have a major change where they may not even be around now.
You and me both. It's hard as a layman to keep track of what's happening in real-time. I love going back to things in ~1-decade intervals and seeing lists and articles etc showing what's happened "in-review".
Usually those kinds of overviews are limited to things like media genres, not fields of science. (eg best video games of the last 10 years, best movies and how they changed the industry etc). Something a bit more academic would be nice...
This is an interesting sub-thread; a lot of the threads here seem to have the wrong idea about how funding really works. I've been involved in DARPA programs and EU programs and sat on national committees allocating research funding, and I think that there is a much more progressive and critical stance than folks portray here. A good example is the Semantic Web movement - big excitement 20 years ago, big funding, big community... no results really... then no funding, and the community switches to other things real fast. There is a lot of griping about this in Comp Sci, with people from the Semantic Web groups rightly saying that the funding bodies bought the hype and misallocated the funds on short-term silly projects, and that if just a fraction of the money had been provided in a structured and long-term way a great deal more would have got done.
But what is really clear is that there is no real institutional effort to do what you suggest above - folks are not really tracking how well the funding system is working or what has worked and what hasn't. All of the reviews that I have seen are forward looking and driven by science insight, administrative change seems to be done by MBA fashion - or inertia.
1. 20 years ago the author wrote a paper asserting that string theory was a fruitless field of study.
2. The paper is written in a confrontational and snarky tone, for example: "The theory has been spectacularly successful on one front, that of public relations." Not a way to build bridges.
3. The author seems bitter that the paper wasn't accepted into Physics Today, and that the suggested rewriting into a letter to the editor did not result in publication either.
4. The author is sticking to their guns.
No matter how well-founded the idea that string theory is a dead end, this isn't likely to improve things.
The scientist saw BS, he called BS, and then 20 years later, the field he criticized turned out to be mostly BS.
If you're offended by the audacity of scientists criticizing pseudoscience, you may want to read about the concept of "polemics". It's an acceptable, longstanding form of rhetoric for when a respected figure publicly disagrees with a direction of another group of people.
Some things are un-falsifiable due to logic, some are un-falsifiable due to the limits of technology. String theory may just be the latter, but all un-falsifiable theories have the same problem.
I assume a publication like Physics Today gets a lot of submissions, and that probably the majority of them are rejected. Maybe there would've been a better chance of acceptance if the article (and presumably also the letter to the editor) wasn't so confrontational.
Physics Today is not a scientific journal. I highly doubt that they get a lot of submissions. In my experience with similar publications in a different subfield, it's typically more that the journalists/editors at those magazines think that some topic is interesting and talk to some of the more famous researchers in the field to write an article. Sometimes it's also a bit the other way around and the researchers suggest articles to editors they know. Not publishing a letter to the editor seems a very strange decision; these are very short letters and sometimes really cover some quite niche topics. Also, not getting back to the author after asking him to rewrite in a different format is just bad form.
Is there a popular science book that begins with Noether’s theorem (along with some biographical details and historical context in which her theorem must be understood) and traces developments from there down to the Standard Model and talks about what the standard model doesn’t solve and why new theories are interesting?
Preferably a book that doesn’t go off into the deep end with alternate realities and dead/alive cats in boxes.
I read Higgs by Jim Baggot, it did a good job of providing an overview - however I would like something that goes into a little more detail while still being a popular science book.
I'd recommend learning the math/physics from a textbook. At some point, the pop-sci explanations get so garbled that they make the actual idea sound harder and more mysterious than it actually is.
When I first understood what entanglement was, mathematically, I was profoundly disappointed. Years of pop-sci had made it seem like a crazy thing, but mathematically, it's quite.. tame. (Some tensor products don't factorize).
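To make "some tensor products don't factorize" concrete, here is a minimal numerical sketch (my own illustration, not from the comment): a two-qubit state factorizes exactly when its 2x2 matrix of amplitudes has rank 1, which the Schmidt rank (the number of nonzero singular values) detects.

    import numpy as np

    def schmidt_rank(state):
        # state: length-4 amplitude vector over |00>, |01>, |10>, |11>.
        # Reshape into a 2x2 matrix; the number of nonzero singular values
        # (the Schmidt rank) is 1 iff the state is a tensor product.
        amps = np.asarray(state, dtype=complex).reshape(2, 2)
        return int(np.sum(np.linalg.svd(amps, compute_uv=False) > 1e-12))

    product = np.kron([1, 0], [1, 1]) / np.sqrt(2)   # |0> (x) |+>  -> factorizes
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)

    print(schmidt_rank(product))  # 1: a product state
    print(schmidt_rank(bell))     # 2: entangled -- no factorization exists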
> When I first understood what entanglement was, mathematically, I was profoundly disappointed. Years of pop-sci had made it seem like a crazy thing, but mathematically, it's quite.. tame. (Some tensor products don't factorize).
Is the _math_ quite tame, or is the phenomenon itself?
That's the part I understood GP to be asking about. I know the math is way over my head, but I'm interested in the concepts, to the extent that I can be. I love popular science that tries to explain that (really been into PBS Spacetime recently).
This is what I'm trying to combat, though! The math is not over your head. If you know high school math, then literally all you need to start studying quantum computation is linear algebra --- which you can learn in a semester, as it is one of the first courses all engineering graduates take, and it is luckily quite intuitive. MIT OCW has Gilbert Strang's video lectures freely available (https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra...), along with exercises and transcripts.
That's the beauty of quantum computation; it harnesses all the intuitions we already have about computation, skips the messy parts of solving stuff in quantum mechanics while still keeping all the fiddly interesting bits that makes quantum quantum.
I really do believe that learning quantum computation is the "correct way" (for some folks) to gain an intuition for quantum mechanics
I don't understand how you could imagine having a discussion about "what the standard model doesn’t solve and why new theories are interesting," that wasn't in the language of quantum mechanics, which the dead/alive cats in boxes thing would be an elementary first step of learning. Field theory is way farther off the deep end than the measurement of a two-state system.
> I managed to learn QM without reference to cats in boxes,
This is because we teach students the "Shut up and calculate" formulation of Quantum Mechanics by default. Interpreting your mathematics doesn't build transistors or fund particle accelerators, for example.
It's still an important thing to consider but it doesn't have the same logical structure as day-to-day Quantum Mechanics
There are several languages with which you can discuss the standard model, particularly Group Theory, which I don't think require much familiarity with the details of QM. I assume this is what the GP is talking about given his mention of Noether currents.
I say this as someone who is quite familiar with QM, so it's very possible I'm just neglecting to consider all the implicit context I'm working with.
Group theory has applications in QM, but you can't go from the standard model to experimental results through group theory alone. To do that, you need to understand all of the physics concepts leading up to it. Even knowing what a conserved quantity is involves some of that implicit context you're talking about.
I don't agree that this is the case; it's possible to make predictions about physics over the SM in the language of group theory without understanding how they map to experimental results in physics.
I picked a random late episode, and I was pleasantly surprised to see him not shying away from discussing homotopy groups.
Reminds me that my brain literally turns to mush whenever topology and differential geometry comes into play - I can power through it, so far at least, but it doesn't seem to slot into my head as well as almost anything else I've come across.
It's really hard going for a non-physics person. I tried to read it end to end once and spent 9 months on it; I subsequently tried to read various interesting chapters out of order and didn't come away feeling that I had mastered the material or the thinking. But it's all there, if you are able to crack it open.
Ultimately if you want a relatively clean PopSci book on that subject you're probably better off reading the Wikipedia page, then supplementing with the opinions of people like (say) Sean Carroll (Book or not).
There's only so much you can say about a topic, which is why you end up with the cats in boxes.
I've spent a lot of time thinking about this subject (in fact, I studied string theory at Columbia which is where Peter Woit is). The main issue that people have with string theory is the lack of testability / falsifiability. I think there's definitely some issues there, but I think blaming string theory is not quite right.
There are two reasons why string theory is tough to test. One of them, and the most obvious one, is that the energy scale where the strings exist is super high (likely at or near the Planck scale). This means that if we wanted to detect the direct signatures of strings, we'd have to create an experiment that is sensitive to these energy scales. This seems to be completely infeasible (people talk about particle colliders having to be the size of our solar system etc.) Now, interestingly this is not actually only a problem with string theory but rather with any theory of quantum gravity. This is because the fundamental scale of a theory is usually determined by combining its fundamental constants in some way as to produce an energy scale, and any theory of quantum gravity must somehow contain Planck's constant, Newton's constant of gravitation, and the speed of light. Taking these together gives you the Planck scale.
So, in other words, if the reason you don't want to study string theory is that it is only testable at the Planck scale, then what you're really saying is that you don't think we should study any theory of quantum gravity. This is, I think, way too extreme a position.
Now, interestingly, string theory is actually more than a theory of gravity, so unlike something like e.g. loop quantum gravity which is only a theory of gravity, it's conceivable that string theory somehow within it contains information that "trickles down" to lower energies and thus could potentially be testable at something like the LHC. This leads me to the second reason why string theory is very difficult to test.
It turns out that the equations of string theory are more-or-less unique at high energies but that as you start lowering the energy at which you probe the theory, multiple distinct solutions emerge. These solutions turn out to have a very nice physical interpretation: they are the different ways in which we can compactify the extra dimensions of string theory. Regardless of this physical interpretation, the fact remains that there are many many distinct solutions of string theory at low energies, and in order to make predictions that are falsifiable, we need to know which of these solutions we're living in. This is where the crux of the problem lies. It turns out that there are so many solutions of string theory that we cannot even in principle go through them one at a time to see if they're feasible (people throw around the number 10^500).
Now, it turns out that the real problem is not actually in the number of distinct solutions to string theory (~10^500), but rather in the way their structure is poorly understood. In fact, any theory of physics contains an infinite number of theories within it. For example, consider the mass of the electron as a free parameter. Until I tell you what the mass is, you can't make a complete prediction for what the energy levels in Hydrogen are. In fact, you could argue that since the mass of the electron is a real number, there are in fact an infinite number of predictions to the energy levels. A bit sarcastically you could then say that at least string theory has "only" 10^500 different theories within it, unlike traditional physics that has this continuous infinite set of theories.
The distinction between these two cases is then that for traditional theories, we can go the other way. If we measure the energy levels of Hydrogen, we can infer the mass of the electron. Then, knowing the mass of the electron, we can make other predictions. It's this last step that's currently missing in string theory. Currently we only know how to move in one direction: give me the solution you're talking about and I might be able to make predictions, but give me observational data and I can't work backwards to determine which solution I'm in. It's almost like a one-way hash.
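To make the free-parameter analogy concrete, here is a minimal sketch using the standard Bohr-model formula and textbook constants (my addition, not the commenter's): the forward map from the electron mass to the hydrogen levels, and the trivial inversion that, by the argument above, string theory currently lacks.

    import math

    e    = 1.602176634e-19    # elementary charge, C
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    h    = 6.62607015e-34     # Planck constant, J s
    m_e  = 9.1093837015e-31   # electron mass, kg

    def E_n(mass, n):
        # Hydrogen energy levels as a function of the "free parameter" mass.
        return -mass * e**4 / (8 * eps0**2 * h**2 * n**2)

    # Forward direction: predict the ground-state level (~ -13.6 eV).
    E1 = E_n(m_e, 1)
    print(E1 / e)  # ~ -13.6

    # Inverse direction: given a measured E1, read the electron mass back off.
    m_inferred = -E1 * 8 * eps0**2 * h**2 / e**4
    print(m_inferred)  # recovers ~ 9.11e-31 kg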
I would say that this last objection is a roadblock that we're currently facing, and it's not perfectly clear that it's not solvable, nor is it clear that it is. I think that until we solve this problem, string theory will be stuck and people will be pointing fingers at the theory calling it a fool's errand. I personally think this criticism is misguided.
The upshot of this is that most people who work on string theory work in areas that are not plagued by this bifurcation to low energies. For example, you can use string theory to study the structure and behavior of black holes and holography, something called AdS/CFT, an area that has been incredibly successful.
I think your post is well thought out but you are hand waving a bit too much over string theory’s inability to reduce to known results. Reduction is an important cornerstone in physics and it’s a way to validate if a theory is on the right track even when things are not measurable.
I do not think it is correct to say that special relativity for example gives 10^500 possible classical theories. We know special relativity needs to produce same outcome as Newtonian mechanics or Maxwell’s equations in the limit as velocity tends to 0. In my simple example, we have 2 well established theories which bound the possible outcomes. This is all possible even without knowing the speed of light.
I agree that it's important to be able to reproduce existing theories. What I don't think is fair is to say that because we have not yet figured out a way to perform this reduction we should throw the theory out. There's a difference between a theory being untestable even in principle and being untestable because we have not yet understood the theory well enough.
I like the analogy of a hash function. Imagine that someone gave you the exact specification of a hash function (e.g. sha256) as well as the hash value of a list of inputs. The only thing missing from the story is the salt that was used in hashing the inputs. You're asked to make a prediction of what ought to happen when you hash the string "hello", but unless you know the salt you can't figure it out. So, you study all the examples provided and try to find collisions so that you can figure out what the salt is. The problem is that while it's easy to hash values, it's very hard to find collisions. It's really frustrating because in some sense you have all the information you need, but unless you're able to find vulnerabilities in sha256, you can't move forward. So, you spend a lot of time trying to understand what this hash function is really doing. Maybe some day you'll crack it at which point you'll be able to figure out the salt and ultimately make your prediction. However, until that day people around you keep telling you that you're being silly because your theory lacks predictive power. They say things like "your theory can predict anything you want it to, just pick your favorite salt and it'll output whatever you want!". It's not that the theory is necessarily wrong, it's that you don't fully understand it yet.
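Here is a toy rendering of that hash analogy (my sketch; salted_hash and the candidate list are illustrative, and none of this is meant as real cryptography): the forward direction is cheap, while recovering the salt from outputs reduces to brute-force enumeration, which is exactly what breaks down when the candidate set has ~10^500 members.

    import hashlib

    def salted_hash(salt, message):
        # Forward direction: trivial to compute.
        return hashlib.sha256(salt + message).hexdigest()

    secret_salt = b"42"   # in the analogy: which solution/vacuum we live in
    observed = salted_hash(secret_salt, b"hydrogen spectrum")

    def recover_salt(observed, message, candidates):
        # Inverse direction: with no structure to exploit, all you can do is
        # enumerate candidate salts and compare.
        for salt in candidates:
            if salted_hash(salt, message) == observed:
                return salt
        return None

    candidates = (str(i).encode() for i in range(100))   # tiny toy search space
    print(recover_salt(observed, b"hydrogen spectrum", candidates))  # b'42'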
> There's a difference between a theory being untestable even in principle and being untestable because we have not yet understood the theory well enough.
If the theory is neither testable nor verifiable, then how do you (or anyone else) know you're even on the right track?
As someone who usually is rather critical of string theory I like your post a lot. I could come up with other criticism of string theory that your post does not cover, but you do present some very good points. That being said, I disagree with the following bit:
> For example, you can use string theory to study the structure and behavior of black holes and holography, something called AdS/CFT, an area that has been incredibly successful.
Here, "successful" only means that other (famous) string theorists have found those ideas worth pursuing. Then, other people, in turn, picked up the idea because the former people had praised it and then they, too, would receive praise. (It's almost like what people on Reddit call, pardon my French, a circle jerk.) However, AdS/CFT hasn't produced a single bit of verifiable experimental evidence in the realm of black holes. It's not even clear what AdS/CFT has to do with our (clearly non-AdS!) universe.
Yes, when I say that AdS/CFT is successful, I definitely am not saying that we have a falsifiable prediction for black holes, but I also don't mean that these are simply results that famous physicists like and promote, it's a bit deeper than that.
For many years it was conjectured that gravitation is actually most clearly understood as a two-dimensional theory (people call this the principle of holography). The rationale is that the entropy of a black hole is proportional to its area rather than its volume. Since a black hole is in a very specific sense the object of maximum entropy, that means that the amount of information that we can store in space is not proportional to its volume, but rather the area of the encompassing "sphere". This is super surprising since it suggests that what we perceive as 3d is really just an illusion and that the correct formulation of physics ought to be a 2d theory.
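For reference, the area law being described is the standard Bekenstein-Hawking entropy (a textbook formula, not something the comment spells out): S_BH = k_B c^3 A / (4 G hbar), i.e. proportional to the horizon area A rather than to the enclosed volume.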
Now, for a long time we didn't have a way to actually write this sort of duality down. We sort of knew one side of the story (gravity with general relativity), but it fails in precisely the domains where we want to investigate it (black holes). String theory, whether you believe it's the true theory of everything or not, is nonetheless a mathematically consistent theory of quantum gravity (in a sense it's at least an existence proof that gravity can consistently be quantized). As such, it's at the very minimum a great arena to analyze these problems carefully. The first explicit construction of the duality between a 3d theory of quantum gravity and a 2d theory without gravity is precisely AdS/CFT. It says that a quantum theory of gravity in AdS space is mathematically and physically equivalent to a conformal field theory that lives in 2d.
I don't think it's fair to say that AdS/CFT is being studied because famous people like it. It really does have a lot of value, if nothing else as a playground to understand how one could in principle formulate these dualities consistently.
I mostly agree with what you're saying but still, the entire "success story" of AdS/CFT is based on other assumptions about quantum gravity none of which has been tested experimentally. Even black hole entropy itself is, up until now, a purely theoretical construct.
> String theory […] is nonetheless a mathematically consistent theory
I hear this claim being perpetuated a lot but I have yet to see a string theorist give a mathematically rigorous introductory lecture on string theory. Don't get me wrong, I'm not saying there are no mathematically precise results in the realm of string theory but I know enough about functional analysis and the issues surrounding the mathematical underpinnings of quantum field theory (or even quantum mechanics) that I'm not buying your claim and my current view is that some parts of string theory are very rigorous, whereas (most) others are not. (AdS/CFT is one such part which belongs to the latter category.) I'd love to be proven wrong, though, so please feel free to send me papers etc.
Ok, when I say mathematically consistent I don't mean it in an axiomatic sense. In fact, as you point out, there are a lot of unanswered questions even within quantum field theory about whether or not it's mathematically well defined. In fact, one of the millennium prizes is related to this.
When I say mathematically consistent, I mean it in a looser sense. If we take a step back to before string theory, there was no way to get consistent results from quantum gravitational calculations. The usual tools that we use to renormalize quantum field theories do not work for gravity. This suggests that there's some type of "ultraviolet completion" of general relativity. In other words, the theory of GR ought to come with some implicit energy cutoff beyond which the theory somehow changes. String theory is such a change in that the stringy corrections to GR would only come into effect around the Planck scale. It's by no means necessarily the unique such completion, but as of now it's the only one we know of.
As to your other point, I think it's a good idea to reframe the work done in string theory as (what I used to joke) "theoretical theoretical physics". In other words, it may be the case that string theory is a true theory of nature, but even if it isn't, the theory lets us explore what such consistent theories could look like and how various paradoxes (like the information paradox in black holes) get resolved. These types of insights may point us in a direction of further investigation that may very well fall outside string theory.
In other words, at the very least (and I personally think this is underselling string theory by a lot) string theory is a proof of concept and a very powerful, sophisticated, and rich arena in which we can begin to understand the salient features of quantum gravity. One such example is AdS/CFT.
It's producing results in strongly coupled field theories. You take strongly coupled physics, translate it to AdS, solve it, and translate it back. Then you get answers for problems like the dynamics of quark-gluon plasma.
Given that there have been exactly zero experiments conducted in laboratories close to black holes, isn't that qualifier kind of a cop-out? Our entire observational knowledge of black holes consists of radiation emitted from the accretion disk (deep in the classical regime), orbits of nearby objects (even farther in the classical regime), and one blurry radio picture (predicted by GR and containing nothing, yet, to our knowledge, outside of GR's predictions.) I guess you could count gravitational waves, but guess what, that's GR too... ;) Given the observational knowledge of today, even a true theory of quantum gravity beamed back from the future would fail to produce verifiable experimental evidence in the realm of black holes.
> Given that there have been exactly zero experiments conducted in laboratories close to black holes, isn't that qualifier kind of a cop-out?
No, I don't think it is. I agree of course that it's hard to test predictions regarding black holes. But I was addressing OP's statement that
> For example, you can use string theory to study the structure and behavior of black holes and holography, something called AdS/CFT, an area that has been incredibly successful.
and I merely intended to express my disagreement with the claim that AdS/CFT has been "incredibly successful". Successful in what way? Measured by what metric? Clearly not by the usual metric that says that experimental evidence is what counts. And on top of all of that AdS isn't even close to the spacetime describing the known universe.
> Given the observational knowledge of today, even a true theory of quantum gravity beamed back from the future would fail to produce verifiable experimental evidence in the realm of black holes.
> These solutions turn out to have a very nice physical interpretation: they are the different ways in which we can compactify the extra dimensions of string theory. Regardless of this physical interpretation, the fact remains that there are many many distinct solutions of string theory at low energies, and in order to make predictions that are falsifiable, we need to know which of these solutions we're living in.
You have created a mathematical structure that is so flexible that it can fit any data. It is useless. You have abandoned the hypothetico-deductive model.
String theory is moreso considered a framework that may lead to physics than a physical theory. Physicists are interested in it because of its potential to lead to physics. Of course, both of these statements are contained in the original comment. That's why, as the original comment suggests, the absolute most important question about string theory is, "which string theory leads to our universe at low energies?"
Why isn't it possible to go through some kind of a search tree and prune the solution set?
I guess to parametrize the theory it takes something other than masses of elementary particles and a few constants?
If I had that big of a solution set I'd definitely try to search through it. If one string theory solution can give me a predict() function that I can verify, I'll find a way to parametrize it (deep neural networks, graph data structures) and then check if predict() matches the predictions we know.
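A minimal sketch of that brute-force idea (entirely hypothetical: ToyVacuum and predict() stand in for things nobody actually knows how to compute; the only point is that plain enumeration scales with the number of candidates, which is hopeless at ~10^500):

    class ToyVacuum:
        # Stand-in for one candidate solution; 'param' is a made-up label.
        def __init__(self, param):
            self.param = param
        def predict(self):
            # Hypothetical observable computed from the solution.
            return self.param ** 2

    def search(candidates, observation, tol=1e-6):
        # Brute force: fine for a handful of candidates, hopeless for ~10^500
        # unless some structure lets whole families be pruned at once.
        for v in candidates:
            if abs(v.predict() - observation) < tol:
                return v
        return None

    candidates = [ToyVacuum(p) for p in range(1000)]
    print(search(candidates, observation=49).param)  # 7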
That would be nice; the problem is that nobody has a good way of doing that. Like the analogy I gave elsewhere, it's sort of like a hash function: doable to compute in one direction but seemingly impossible to invert. If that's the case, it'd be very difficult to create such a tree to prune because there's no way to categorize the nodes. In other words, the depth of the tree would be 1 and the breadth would be ~10^500.
Interesting, the first thing that I assumed was that it's easy to compare one solution to another and get a nice finegrained comparison. For example, a set of solutions with some parameter is always inferior to another set with a better parametrization.
I'm confused on your last point. If indeed string theory is wrong, doesn't that invalidate everything we've learned about black holes and holography via AdS/CFT too?
So, take as a silly example a regular pendulum of mass m and length l swinging on earth where the acceleration due to gravity is g.
Suppose we want to figure out what the period of the pendulum is (i.e. how long it takes for it to make one full swing from left to right back to left).
We could go ahead and solve this by using Newton's laws, but at the end of the day whatever formula we find should have within it the parameters mentioned above (as well as pure numbers like 2, pi, etc.). If we take a look at the units of the parameters above, we see:
mass of pendulum (m): kg
length of pendulum (l): m
acceleration due to gravity (g): m/s^2
Out of these we want to produce something with units of time, and it turns out that there's a unique way of combining the above to get that: 1/g has units of s^2/m, so l/g will have units of s^2. Just take the square root and you find sqrt(l/g).
Now, this doesn't necessarily mean that the period of the pendulum is exactly sqrt(l/g), it just means that it needs to be proportional to this. In fact the formula for the period has an extra factor of 2pi in it: 2pi sqrt(l/g). The point is that the general magnitude you'd expect for any timescale associated with the pendulum would be roughly equal to sqrt(l/g).
For quantum gravity the story is the same. We have some parameters that we expect should be deeply part of any prediction: Planck's constant, the speed of light, Newton's constant of gravitation. The units are as follows:
Planck's constant (hbar): m^2 kg / s
Speed of light (c): m/s
Gravity (G): m^3 / kg s^2
From these we want to figure out what the relevant energy scale is (i.e. when will this theory start differing from a non quantum mechanical version of gravity). Well, there's only one way to combine these into an expression that has the dimensions of energy. This particular way of combining the constants is huge and is traditionally called the Planck scale:
E_Pl ~ sqrt(hbar c^5 / G) ~ 10^19 GeV
In contrast, the LHC is currently probing physics on the scale of ~ 10^4 GeV so we're not even close to probing things in the realm of quantum gravity.
Note that this analysis has nothing to do with string theory at all and generalizes to any theory of quantum gravity.
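A quick numerical check of both dimensional-analysis results above (my sketch; the constants are standard SI values):

    import math

    # Pendulum: the only timescale you can build from l and g is sqrt(l/g);
    # the full formula just adds the dimensionless factor 2*pi.
    l, g = 1.0, 9.81                        # 1 m pendulum on Earth
    print(2 * math.pi * math.sqrt(l / g))   # ~ 2.0 s

    # Quantum gravity: the only energy you can build from hbar, c, G.
    hbar = 1.054571817e-34   # J s
    c    = 2.99792458e8      # m/s
    G    = 6.67430e-11       # m^3 / (kg s^2)
    E_planck_J = math.sqrt(hbar * c**5 / G)
    print(E_planck_J / 1.602176634e-10)     # ~ 1.2e19 GeV, vs ~10^4 GeV at the LHC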
I wonder if a similar moment of reckoning will ever come for modern macroeconomics. I remember working on differential equations on the economy one night and having the sudden insight that the data was far too skimpy and uncontrolled for this to pretend to be science. And yet there I was, going through the motions of doing math on the economy.
Ditto, of course, for holding accountable the economic beliefs of politicians and talking heads of all persuasions. The people conscious of how much we don't know get drowned out by the people who don't care.
It may be true that, "Lubos Motl is still arguing that string theory is the language in which God wrote the universe"; but it has been several years since I last encountered any text he wrote. So, that seems like progress, of sorts.
I gather that Alzheimer's research is still almost exclusively about amyloid plaques and tau tangles, despite absolutely negative results and disastrous drug trials for decades.
And, of course, magnetic confinement fusion is still 20 - 30 years off, and always will be.
Incidentally I've just read the book "Lost in Math" by Sabine Hossenfelder, and it really goes down into the problems of modern science, its funding, its "publish or perish", its fashions... A really excellent (and quite funny, too) book.
There are lots of examples in physics of math solutions leading to discoveries. On the plus side string theory is a tool that makes it possible to derive conjectures. One day we will find a testable prediction..
> Almost exactly twenty years ago I started writing a short article about the problems with string theory. [...] The piece was done in a week or two, after which I sent it around to a group of physicists to ask for comments. The reaction was mostly positive, although at least one well-known theorist told me that publicly challenging string theorists in this way would be counter-productive.
I like to know the subject before I wander off HN following a nondescript title...
I almost certainly would have never clicked on an article about String Theory, even if it was trending at #1, but because of the vague title I did and was happily surprised to learn something about the world of physics and physics researchers.
The title wasn't written for you, but that doesn't make it clickbait. That's true even though it's not a great title.
The author is a physicist/computer scientist writing on his personal blog. He is writing about some of his own work, 20 years later, and his likely audience probably has context about that work, or will be linked from somewhere that provides that context. The fact that you and I stumbled upon it from a decontextualized aggregator is a bug.
Peter Woit (the author) is a lecturer in the department of mathematics at Columbia University. He is kind of a black sheep in the high-energy physics world. He has a PhD in physics from Harvard, so he certainly has the credentials, but his academic career stalled/never took off, so after some postdocs he took a job at Columbia teaching mathematics and maintaining the department's computer network. A fact that has not passed unnoticed by his "enemies" (many string theorists), so they use it to mock him and dismiss his not entirely unreasonable criticism of ST. That shows you that eminent physicists can be as tribalistic, petty and emotional as a stereotypical 14-year-old girl.
No, clickbait is usually disallowed in the title. The rule is to use the original title; most of the time it's indeed an anti-clickbait rule (although some false negatives are unavoidable).
> Please don't do things to make titles stand out, like using uppercase or exclamation points, or saying how great an article is. It's implicit in submitting something that you think it's important.
> [...] Please use the original title, unless it is misleading or linkbait; don't editorialize.
My personal observation: there are three types of bad titles.
1. Clickbait titles editorialized by a submitter to make everyone know how great/important the article is.
2. Clickbait titles from the source.
3. Non-descriptive title from the source.
60% of the bad titles are Type-1 titles from submitters (as everyone who has ever browsed Reddit would know). To prevent such behavior, the "use original title" rule is implemented on Hacker News.
30% of the bad titles are Type-2 titles. On Hacker News, if many readers loudly criticize that the original title is misleading, eventually a moderator will correct it manually.
10% of the bad titles are Type-3: non-descriptive titles that contain little information but are neither misleading nor clickbait. On Hacker News, usually, nothing happens. This article is one of them. Also, any attempt to change the title could be seen as Type-1 editorialization, so the original title remains. So sometimes, even after the title has been changed by the submitter to be more descriptive, it can be reverted back to the original, non-descriptive one.
> This site doesn't allow comments without substance, but for some reason link titles don't have the same standards.
Whenever people complain that Hacker News allows non-descriptive titles, please consider the fact that "use the original title" does more good than harm in general, with a false negative rate of only 10%, which is a small price to pay.
I like String Theory. It's a beautiful theory that's striking all the right chords within me.
Let me paraphrase the great Arthur C. Clarke on this. We are fish in water. And sometimes fish jump out of the water. And sometimes fish, while jumping out of the water, glimpse fires on land.
That's String Theory for me - a fire on land that we catch a glimpse of and can't quite make out. And until we walk on land and make fire ourselves, we'll never fully understand its potential. 20 years is nothing in this regard.
IIRC, one big problem of string theory was the shape of those strings: the theory allows many, but our universe uses one and we don't know which.
I think there's a good candidate for this shape. Many books, rather old, but completely inadmissible in the court of modern science, describe the smallest particle in a way that curiously resembles a string. The string forms a coil, very similar to a wire coil in electromagnets, that revolves almost 3 times around the central axis and then returns back to the entrance. The outer shape is an oblate spheroid and the entire thing is dynamically stable. The coil itself carries electricity, and because of its shape, it produces a magnetic field that streams thru the center. The particle performs rapid gyrations around the central axis - that's probably the same effect as the Lamour precession. There are two types of this particle: one with clockwise direction of the coil and the other with counterclockwise. The string enters and exits the area in a direction perpendicular to all 3 dimensions, so it kind of appears from nowhere. The string itself is made of a finer coil, and that of even finer coil and so on a few times. If the 10 dimensions guess is correct, then there are probably 6 or 7 such finer coils. If the entire thing is unrolled, it would form a circle of very large radius. While such a particle, that probably corresponds to the electron, has enormously complex internal structure, it's still a fundamental particle of 3d world, because cutting the string in any place makes it recombine into numerous tiny coiled strings, but all of them freely float along the 4th dimension and barely interact with the 3d particles, so they become kind of background noise of unknown origin. The magnetic stream that flows thru the central axis of each 3d particle is made of these tiny 4d particles, and those probably replicate the entire arrangement at a smaller scale, so there probably are even smaller 5d coiled strings and so on.
Like I've said, this kind of description is inadmissible in science, but it's ok to "guess" it and invent a plausible explanation that this shape minimizes some weird functional in a Hilbert space of string theory.
Yeah... um... "many books, rather old, but completely inadmissible in the court of modern science" seems like an improbable place to expect to have any accurate insight into the actual fundamental nature of fundamental particles. Even if they contain something that, if you squint hard enough, looks like a string, that's no reason to suppose that reality actually looks like that. (But I suspect that you're inclined to accept those books for other reasons, and therefore to accept them as also accurate about physics. I suppose a starting point is where you find it, but you should forgive the rest of us when we remain unconvinced that it is likely to be a useful starting point.)