Of those, the quantum superposition is the only one that has a chance at being considered objective, and it's still only "objective" in the sense that (as far as we know) your description provided as much information as anyone can possibly have about it, so nobody can have a more-informed opinion and all subjects agree.
The others are both partial-information problems which are very sensitive to knowing certain hidden-state information. Your random number generator gives you a number that you didn't expect, and for which a formula describes your best guess based on available incomplete information, but the computer program that generated it knew which one to choose and would not have picked any other. Anyone who knew the hidden state of the RNG would have assigned a different probability to that number being chosen (in fact probability 1, since the generator is deterministic given its state).
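To make that concrete, here's a minimal sketch in Python (the seed and the die-sized range are arbitrary choices for illustration): an outsider's best guess about the next draw is uniform, while anyone holding the generator's hidden state can predict it with certainty.

```python
import random

rng = random.Random(42)   # hidden state fixed by the seed
state = rng.getstate()    # snapshot of the hidden state

# Without the state, the best guess for the next draw is uniform over 1..6.
outsider_guess = {k: 1 / 6 for k in range(1, 7)}

# With the state, the "random" output is fully determined.
insider = random.Random()
insider.setstate(state)
prediction = insider.randint(1, 6)

actual = rng.randint(1, 6)
assert prediction == actual   # probability 1, not 1/6, given the hidden state
```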
You might have some probability distribution in your head for what will come out of GPT-2 on your machine at a certain time, based on your knowledge of the random seed. But that is not the GPT-2 probability distribution, which is objectively defined by model weights that you can download, and which does not correspond to anyone’s beliefs.
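And that distribution really is a pure function of the weights, computable by anyone who downloads them. A sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint (the prompt is arbitrary):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The die landed on", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # shape (1, seq_len, vocab_size)

# Softmax over the final position gives the next-token distribution.
# Every copy of the weights yields exactly these numbers, regardless
# of anyone's beliefs.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
```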
I'm of the view that strictly speaking, even a fair die doesn't have a probability distribution until you throw it. It just so happens that, unless you know almost every detail about the throw, the best you can usually do is uniform.
So I would say the same of GPT-2. It's not a random variable unless you query it. But unless you know unreasonably many details, the best you can do to predict the query is the distribution that you would call "objective."
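The analogy is fairly tight, too: a sampled query to GPT-2 is deterministic given the RNG seed, just as the die throw is (in principle) determined by its initial conditions. A sketch, again assuming transformers, with an arbitrary seed and prompt:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("The die landed on", return_tensors="pt")

torch.manual_seed(0)   # knowing this "hidden state" pins down the query
first = model.generate(**inputs, do_sample=True, max_new_tokens=5)

torch.manual_seed(0)   # reset to the same state...
second = model.generate(**inputs, do_sample=True, max_new_tokens=5)

assert torch.equal(first, second)   # ...and the "random" query repeats exactly
```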
I think this gets into unanswerable metaphysical questions about when we can say mathematical objects, propositions, etc. really exist.
But I think if we take the view that it's not a random variable until we query it, that makes it awkward to talk about how GPT-2 (and similar models) is trained. No one ever draws samples from the model during training, but the whole justification for the cross-entropy-minimizing training procedure is based on thinking about the model as a random variable.
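Schematically, the training step looks something like this (stand-in tensors rather than a real model, to show that the loss consumes the whole predicted distribution and never a sample from it):

```python
import torch
import torch.nn.functional as F

# Stand-in for a model's output: a full next-token distribution (as logits)
# at each of 8 positions, over GPT-2's 50257-token vocabulary.
vocab_size = 50257
logits = torch.randn(1, 8, vocab_size, requires_grad=True)
targets = torch.randint(vocab_size, (1, 8))   # the tokens that actually came next

# Cross-entropy scores the log-probability the model assigned to the
# observed tokens. The model is treated as a distribution throughout,
# yet no sample is ever drawn from it.
loss = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1))
loss.backward()
```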
A more plausible way to argue for objectiveness is to say that some probability distributions are objectively more rational than others given the same information. E.g. if you're looking at a symmetrical die, it would be irrational to give 5 a higher probability than the other faces. Or it seems irrational to believe that the sun will explode tomorrow.