Say more about this! Let's say you had $100B to build better climate models over the next 5 years. Where would you spend it?
My general sense is the models are good enough to tell us that a) things are headed in the wrong direction and b) we're really far off from a solution. Generally in agreement with you but I haven't researched deeply about what's needed on more modeling and how to prioritize.
You need at least two completely independent teams who never see each other's code, so you get a rough idea of how much developer bias contributes to the signal. Right now the models all copy code from each other, so a single error can propagate until all the climate models are affected. Replication here should mean different scientists writing different code and arriving at the same estimates.
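A minimal sketch of what that kind of replication check could look like, assuming two teams independently implement the same physical calculation (here a simple Stefan–Boltzmann blackbody flux, purely as an illustration) and you compare their outputs. Any disagreement above numerical noise points at the implementations, not the physics:

```python
# Hypothetical sketch: two independently written implementations of the
# same physics, compared to estimate implementation-level disagreement.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4 (CODATA)

def flux_team_a(temp_k):
    # Team A: direct power formula
    return SIGMA * temp_k ** 4

def flux_team_b(temp_k):
    # Team B: same physics, coded independently via repeated multiplication
    t2 = temp_k * temp_k
    return SIGMA * t2 * t2

def max_relative_disagreement(temps):
    # Worst-case relative difference between the two implementations
    worst = 0.0
    for t in temps:
        a, b = flux_team_a(t), flux_team_b(t)
        worst = max(worst, abs(a - b) / a)
    return worst
```

For a correct pair of implementations `max_relative_disagreement` should be at the level of floating-point noise; a larger gap flags a coding difference worth investigating.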
It needs to be open source, and it needs to be far more organized: no physical "constants" that aren't actually constant, that have bad precision, or that take different values in different parts of the code.
There should be actual error modeling built in, or at least a way to measure and quantify the models' accuracy.
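Even without full uncertainty propagation, a model can routinely report standard skill metrics against paired observations. A minimal sketch, assuming you have matched lists of model outputs and observed values (the function names are illustrative):

```python
import math

def mean_bias(predicted, observed):
    # Average signed error: positive means the model runs high
    n = len(predicted)
    return sum(p - o for p, o in zip(predicted, observed)) / n

def rmse(predicted, observed):
    # Root-mean-square error: typical magnitude of the misfit
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
```

Publishing numbers like these alongside every model run turns "accuracy" from an assertion into a measurement.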
Some models use the CO2 input only for radiation and not for the many other parameters that depend on CO2, because the code is a mess and constants that shouldn't be constants are scattered everywhere. And getting the "constants" right matters even more than running the simulation at high resolution.
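The "constants scattered everywhere" problem has a standard software fix: a single source of truth. A hypothetical sketch of what that could look like, with each value defined once alongside its units and provenance so precision can be audited (the entries shown are illustrative):

```python
# Hypothetical single-source-of-truth module for physical constants.
from dataclasses import dataclass

@dataclass(frozen=True)
class Constant:
    value: float
    units: str
    source: str  # provenance, so the precision can be audited

CONSTANTS = {
    "stefan_boltzmann": Constant(5.670374419e-8, "W m^-2 K^-4", "CODATA 2018"),
    "co2_molar_mass": Constant(44.0095e-3, "kg mol^-1", "IUPAC"),
}

def get_constant(name):
    # Every module looks values up here, so exactly one value
    # exists per constant in the whole program.
    return CONSTANTS[name].value
```

With this layout, a quantity that is actually a parameter (not a true constant) has to be passed in explicitly rather than silently hard-coded in several places with several values.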
Getting real physical data to replace all the ad hoc constants is extremely important, and so is validating the correctness of the cloud modeling against real-world data.
And you can't write huge spaghetti code; it needs to be readable and comprehensible to someone other than the original programmer, so that people can validate the assumptions.
I think you need an expert team of software engineers, physicists, and climate experts that will either build good climate models or untangle where the current models' variability comes from.
Honestly, having looked at the code of some models, I don't think there's much hope of understanding them to the point where you know where the deviations come from. It's probably easier to build new models subject to much stricter testing (unit tests, evaluation against real-world data) than to salvage the current mess. And just throw out the models that fail the testing.
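A toy sketch of that "throw out what fails" gate, assuming each candidate model can be run against a basic physical invariant. Here the invariant is a zero-dimensional global energy balance (absorbed solar equals emitted infrared), which is a deliberately simplified stand-in for real acceptance tests; all names are illustrative:

```python
# Hypothetical acceptance gate: keep only models that satisfy a basic
# physical invariant within tolerance.

def toy_equilibrium_temp(albedo, solar_const=1361.0, sigma=5.670374419e-8):
    # Zero-dimensional energy-balance model:
    # (1 - albedo) * S / 4 = sigma * T^4
    absorbed = (1.0 - albedo) * solar_const / 4.0
    return (absorbed / sigma) ** 0.25

def passes_energy_balance(model_fn, albedo=0.3, tol_k=1.0):
    # A candidate model must reproduce the balance temperature within tol_k
    expected = toy_equilibrium_temp(albedo)
    return abs(model_fn(albedo) - expected) <= tol_k

def select_models(models):
    # models: dict of name -> callable(albedo) -> temperature in kelvin
    return [name for name, fn in models.items() if passes_energy_balance(fn)]
```

Real gates would check many invariants (conservation of energy, mass, and momentum; hindcasts against observations), but the structure is the same: the test suite, not the authors, decides which models survive.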