Not clear how to deal with this either - you can improve authentication, but that won't stop properly auth'ed users from running LLMs. You can watermark the output of officially vended LLMs (Scott Aaronson seems to be working on that), but nothing is gonna prevent people from running non-watermarked versions.
It's basically too late - as soon as one rich person decides to train a model and dump it, it's game over. I feel we might actually be experiencing the last months of an internet where you can expect to be talking to a real human.
Every year the cost of training these models drops, so they won't be out of reach for long.