I'm sure that's very important in principle, much less sure that it matters in practice. Put differently, I struggle to complete the following sentence: "This limitation sharply curtails utility, and it cannot be worked around because …"
Maybe others can complete it; maybe it'll be easy to complete in twenty years, with a little more hindsight. Maybe.
> Put differently, I struggle to complete the following sentence: "This limitation sharply curtails utility, and it cannot be worked around because …"
Ok, but that's more on you than on current AI; the models that get distributed (both LLMs and Stable Diffusion-based image generators) already circulate in re-trained, specialised derivatives created by people who know how and have a sufficiently powerful graphics card.
Which is a kind of workaround for the inability to learn after the end of training… though it's not clear to me how much it actually mitigates that inability. Is it clear to you? If so, please feel free to post a wall of text ;)
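For concreteness, the specialisation described above usually means a parameter-efficient fine-tune of the released weights. A minimal sketch, assuming the Hugging Face transformers/peft/datasets stack; the checkpoint name ("gpt2"), the corpus file, and every hyperparameter here are placeholders, not anyone's actual recipe:

```python
# Minimal LoRA fine-tune of a released checkpoint -- a sketch, not a recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Freeze the base weights and train only small low-rank adapter matrices --
# this is what makes the job feasible on a single consumer graphics card.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

# "my_domain_corpus.txt" is a hypothetical file of domain text.
data = load_dataset("text",
                    data_files={"train": "my_domain_corpus.txt"})["train"]

def tokenize(batch):
    toks = tokenizer(batch["text"], truncation=True,
                     max_length=256, padding="max_length")
    toks["labels"] = toks["input_ids"].copy()  # causal LM: predict the input
    return toks

data = data.map(tokenize, batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=data,
).train()

model.save_pretrained("my-specialised-derivative")  # the "derivative" above
```

Note, though, that this is batch re-training done offline by a person; between such runs, the deployed model still learns nothing on its own.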
So far as I know, all current AI systems need far more examples than we do.
But that's not why LLMs are "unable" to learn: the part which does the learning is simply not included when the model is deployed for inference.
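To make that concrete: what ships for inference is just the forward pass over frozen weights. A minimal PyTorch/transformers sketch (the checkpoint name is illustrative); the point is what's absent, not what's present:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; any released causal LM behaves the same at inference.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

model.eval()                       # inference behaviour (dropout etc. disabled)
for p in model.parameters():
    p.requires_grad_(False)        # weights frozen: nothing can update them

with torch.no_grad():              # no backward graph is built at all
    inputs = tokenizer("The capital of France is", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=5)

print(tokenizer.decode(out[0], skip_special_tokens=True))

# Absent by construction: no loss function, no loss.backward(), no
# optimizer.step(). That machinery is the "part which does the learning",
# and it lives only in the training code, which never ships with the model.
```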