I wouldn't build a whole application around OpenAI either, but I could imagine OAI being leveraged for single features, maybe where the user doesn't even know it's AI. I could imagine content moderation, converting between file formats, or reading sentiment from user tweets about a company.
Maybe the huge value of OpenAI doesn't lie in generating new content, but in cutting the costs of existing abstraction tasks.
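For the tweet-sentiment case, here's a minimal sketch of what such a hidden feature could look like, assuming the official OpenAI Python SDK with OPENAI_API_KEY set in the environment; the model name and prompt wording are illustrative assumptions, not a recommendation:

```python
# Minimal sketch: classify a tweet's sentiment toward a company via the
# OpenAI API. Assumes the official `openai` Python SDK and OPENAI_API_KEY
# in the environment; model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def tweet_sentiment(tweet: str, company: str) -> str:
    """Return 'positive', 'negative', or 'neutral' for a tweet about `company`."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the tweet toward the named "
                        "company. Answer with exactly one word: "
                        "positive, negative, or neutral."},
            {"role": "user", "content": f"Company: {company}\nTweet: {tweet}"},
        ],
        temperature=0,  # keep the labels as deterministic as possible
    )
    return resp.choices[0].message.content.strip().lower()

print(tweet_sentiment("Their new earnings look great, stock is flying!", "ACME"))
```

From the user's side this is just a sentiment badge next to a ticker; the API call is an implementation detail.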
So, no content generation, as that would be unethical without disclosure of AI usage.
>I could imagine that OAI could be leveraged for single features
That's a possible application that wouldn't be easily copied by a large company. But the value of such features is quite limited.
> I could imagine content moderation, or reading sentiment from user tweets about a company.
I am pretty sure that these services will soon be provided by large companies and by OpenAI itself. They are generic enough to have a wide market, and many of these companies already have expertise in them.
>So, no content generation, as that would be unethical without disclosure of AI usage.
Why unethical? I think these systems could create real content that is useful to everyone, just as spam content can be generated by humans.
> I am pretty sure that these services will be provided soon by large companies and OpenAI. They are generic enough to have wide market, and many of these companies already have expertise in them.
Well, content moderation on your already existing venue (internet board, Discord, or whatever) can't really be taken away. If a sentiment indicator were implemented into e.g. TradingView.com, it couldn't be copied either, because most subscribers are there for the main application. It would just be a small feature to justify upselling or whatever.
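And the moderation side is arguably even simpler to wire in, since OpenAI exposes a dedicated moderation endpoint. A hedged sketch, again assuming the official Python SDK; `publish()` and `hold_for_review()` are hypothetical stand-ins for whatever your board already does:

```python
# Sketch: gate new posts on an existing board through OpenAI's moderation
# endpoint. Assumes the official `openai` Python SDK with OPENAI_API_KEY set;
# publish()/hold_for_review() are hypothetical stand-ins for the board's code.
from openai import OpenAI

client = OpenAI()

def publish(text: str) -> None:           # hypothetical board function
    print("published:", text)

def hold_for_review(text: str) -> None:   # hypothetical board function
    print("held for human review:", text)

def handle_new_post(text: str) -> None:
    result = client.moderations.create(input=text).results[0]
    if result.flagged:
        hold_for_review(text)
    else:
        publish(text)

handle_new_post("What a lovely day on the forum!")
```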
> Why unethical?
I'm specifically talking about content generation without disclosure. Imo it's fine to generate content as long as you make it clear to the reader that they are reading text generated by AI. This is also in line with OpenAI's terms of service: "The role of AI in formulating the content is clearly disclosed in a way that no reader could possibly miss, and that a typical reader would find sufficiently easy to understand."
I believe that it's a misguided policy. LLMs are becoming a part of us, like mobile phones, the internet, Google, messengers, and a multitude of other things before them. We should embrace this.