That sounds a little like parameter fitting. But maybe that's ignorantly harsh. The fantasy of simple being beautiful and so more likely "true" (which itself is an iffy concept).
Anyway, isn't Λ basically a constant term in the gravity equation? So then you argue that Λ isn't constant. Maybe it depends on time. Or on distance, which I guess just makes it a polynomial. Something like that?
You're exactly right. We have a model (Lambda CDM + GR + a few other details). We have ways to generate descriptions of how the universe would look given certain parameters (H0=70, Omega_Lambda=0.73, etc.), and we basically just see what range of parameters gives a universe that looks like (quantified using some statistics) the one we see through our telescopes.
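To make the procedure concrete, here's a toy sketch in Python. The model function, data, and error bars are completely made up (a real pipeline would compare e.g. supernova distances or CMB spectra); it just shows the shape of "scan parameters, keep the ones whose predictions match the data":

```python
# Toy sketch (not a real cosmology pipeline): grid-scan two parameters
# against fake "observations" and keep the combinations whose
# chi-square statistic is small.

def model(z, H0, omega_lambda):
    # Hypothetical stand-in for a real distance-redshift prediction.
    return (3e5 / H0) * z * (1 + 0.5 * (1 - omega_lambda) * z)

# Fake data "generated" with H0=70, omega_lambda=0.73, plus known error bars.
data = [(z, model(z, 70.0, 0.73)) for z in (0.1, 0.3, 0.5, 1.0)]
sigma = 50.0  # assumed measurement error on each point

allowed = []
for H0 in range(60, 81):
    for ol in (x / 100 for x in range(60, 86)):
        chi2 = sum(((d - model(z, H0, ol)) / sigma) ** 2 for z, d in data)
        if chi2 < 1.0:  # crude acceptance cut
            allowed.append((H0, ol, chi2))

# The "allowed" region of parameter space, and its best-fit point:
print(min(allowed, key=lambda t: t[2]))
```

Real analyses do the same thing in spirit but with MCMC sampling over many parameters and a proper likelihood, rather than a brute-force grid.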
But this is just phenomenology. The next step is working out the physics. For example, say we know there is X amount of something that looks like a cosmological constant - but what is it? This is what, e.g., the search for the dark matter particle is about: we know there is something that is cold + collisionless, but what particle is it?
> So then you argue that Λ isn't constant. Maybe it depends on time. Or on distance, which I guess just makes it a polynomial. Something like that?
Yup, I'm pretty sure the only models we have tested are wCDM, which allows w (the equation of state of dark energy) to be something other than -1 (which is what the cosmological constant is). There is also w(a), which parameterizes the equation of state of dark energy as a linear function of the scale factor (just think of it as time: a=1 now, a=0 at the big bang). So linear rather than constant. We haven't gone to higher order than that.
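For anyone curious what "linear in the scale factor" means, the standard form is the CPL parameterization, w(a) = w0 + wa·(1 − a). A tiny sketch (the parameter values here are just illustrative):

```python
# CPL parameterization of the dark energy equation of state:
#   w(a) = w0 + wa * (1 - a)
# A cosmological constant is the special case w0 = -1, wa = 0
# (w is exactly -1 at all times).

def w(a, w0=-1.0, wa=0.0):
    return w0 + wa * (1.0 - a)

print(w(1.0))                   # today (a=1), pure Lambda
print(w(0.5, w0=-1.0, wa=0.5))  # earlier times, with evolving dark energy
```

Constraints then get quoted as allowed regions in the (w0, wa) plane, with Lambda sitting at the point (-1, 0).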
The downside to adding parameters, though, is that while you can always fit your data better (or at least as well) with more parameters:
1: Your error bars often blow up
2: Getting from phenomenology to physics might become hard. There are some models people have proposed that might allow us to fit the data, but then you need to explain why w changed in a very particular way at a very particular time. Basically it starts to look a little like overfitting.
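The "more parameters always fit at least as well" point is easy to demonstrate with a toy example, unrelated to cosmology (made-up data, pure stdlib): five noisy points drawn near y = 2x + 1, fit with a 2-parameter line versus a 5-parameter polynomial that passes through every point exactly.

```python
# Toy overfitting sketch: same 5 noisy points, two fits.
# "Truth" is y = 2x + 1; the noise is fake and fixed for reproducibility.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.2, 4.9, 7.1, 9.0]

# Two parameters: ordinary least-squares line.
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def line(x):
    return slope * x + intercept

# Five parameters: Lagrange polynomial hitting every data point exactly.
def interp(x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# On the data, the 5-parameter fit "wins" (zero residual). Step just
# outside the data (x=6, where the truth is 13) and it goes wild,
# while the line stays close.
print(round(line(6.0), 2), round(interp(6.0), 2))
```

The cosmology analogue: extra terms in w(a) will always soak up noise in the data, so a better fit alone isn't evidence that the extra structure is physical.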
Overfitting a model for global climate change, for example, isn't an issue, because you're not interested in something like physics. I mean, it's based on physics, but that's buried way down in the model.
But physics has different goals. Closer to math, I guess.
It's actually very easy to overfit climate models. They are fit to observed data with statistical inverse-problem techniques (the same, I imagine, as is done with astronomical data). Climate change models are just directly discretized physical equations. Just like in astronomy, the decisions are about which physics is represented explicitly in the model and which is parameterized.
> Climate change models are just directly discretized physical equations.
I'm no expert, but it's my understanding that they're hugely more complicated than that implies. Sure, there's lots of physics there. But also chemistry and biology. The best ones are general circulation models,[0] and the outcomes will never fit some pretty theoretical structure.
> Overfitting a model for global climate change, for example, isn't an issue, because you're not interested in something like physics. I mean, it's based on physics, but that's buried way down in the model.
It's not so much that as that the goals are different. We want to understand cosmology for its own sake. We want to understand climate change because that knowledge drives policy. For that purpose, it doesn't really matter that we're unable to predict the exact weather in Denver at 11:23 AM on October 27, 2091. What matters is that we are able to predict in broad brushstrokes that the consequences of business as usual will probably be bad, and so we ought to seriously consider doing something about it. There is no conceivable outcome of cosmological modeling that will drive policy changes like that.
I agree that models for the overall development of the universe are a lot like models for global climate change. The scale is vastly different, of course. But I bet that the relative cell sizes in our models are similar. Because they're running on similar machines.
But the goals for a theory of gravity, and its integration with QM, are totally different. Or at least, that's my perhaps naive opinion.
Edit: That is, relative cell sizes and total cell counts.