I think that ChatGPT could be a big accelerator for creative activity, but I wouldn't trust any output that I've only partially verified. That limits its direct output to human-scale problems, but there are many ways that human-scale output like code snippets can be useful on computer-scale data.
For simple things it's pretty safe. I tried pasting in HTML from the homepage of Hacker News and having it turn that into a list of JSON objects each with the title, submitter, number of upvotes and number of comments.
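A minimal sketch of the kind of snippet this produces, using only the standard library. The sample HTML and the class names (`athing`, `titleline`, `score`, `hnuser`) mirror HN's markup as I understand it, but that markup can change, so verify against the live page before trusting the output:

```python
import json
from html.parser import HTMLParser

# Toy stand-in for the real front page; structure assumed from HN's markup.
SAMPLE_HTML = """
<tr class="athing"><td><span class="titleline">
  <a href="https://example.com">Show HN: A thing</a></span></td></tr>
<tr><td class="subtext"><span class="score">123 points</span> by
  <a class="hnuser">alice</a> | <a href="item?id=1">45&nbsp;comments</a></td></tr>
"""

class HNParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.items = []        # finished story records
        self._mode = None      # which field the next text chunk belongs to
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        cls, href = a.get("class", ""), a.get("href", "")
        if tag == "span" and cls == "titleline":
            self._in_title = True
        elif tag == "a" and self._in_title:
            self.items.append({"title": "", "submitter": None,
                               "points": 0, "comments": 0})
            self._mode = "title"
        elif tag == "span" and cls == "score":
            self._mode = "points"
        elif tag == "a" and cls == "hnuser":
            self._mode = "submitter"
        elif tag == "a" and "item?id=" in href:
            self._mode = "comments"

    def handle_endtag(self, tag):
        if tag == "span":
            self._in_title = False
        if tag in ("a", "span"):
            self._mode = None

    def handle_data(self, data):
        if not self._mode or not self.items:
            return
        item = self.items[-1]
        if self._mode == "title":
            item["title"] += data
        elif self._mode == "points":
            item["points"] = int(data.split()[0])    # "123 points" -> 123
        elif self._mode == "submitter":
            item["submitter"] = data.strip()
        elif self._mode == "comments":
            item["comments"] = int(data.split()[0])  # "45 comments" -> 45

parser = HNParser()
parser.feed(SAMPLE_HTML)
print(json.dumps(parser.items, indent=2))
```

This is exactly the "human-scale output" case: the script is small enough to read and verify line by line, yet it can then be run over arbitrarily many pages.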
There are two classes of response: those that are factually "right" or "wrong" ("Who was the sixteenth president of the U.S.?") and those that are opinion/debatable ("How should I break up with my boyfriend?"). People will focus on the facts, and those will be improved (viz. the melding of ChatGPT with Wolfram Alpha), but the opinion answers are going to be more readily accepted (and harder to optimize?).