I'd use a local LLM too, to make sure the original prompt doesn't leak and can't be linked back to the published output.