You don't have to go that far. Depending on the topic, it's already very hard to use Gemini/ChatGPT in a corporate setting. Think: you're drafting an FAQ for a child safety crisis, or a public affairs slide deck.
It's funny how the quality of associates' work sometimes drops on exactly the topics where text generators are least useful.
The way forward for corporations is almost certainly either LLMs built for very specific use cases or local/internal LLMs. Their producers will probably be less afraid of being canceled by populists and will hence introduce less censorship.
Do you actually talk to enterprise users? Nobody I’ve spoken to has ever once complained about censorship. Everyone is far more worried about data governance and not getting sued. Maybe it’s just that zero people I talk to are making child safety crisis slide decks? Seems like an unusual use case for most businesses.
Your point about data security is valid, but why phrase the comment so provocatively and angrily?
> Seems like an unusual use case for most businesses.
You may not work in a public affairs consultancy. Companies do different things. Well drilling is also "an unusual use case for most companies". That does not make it any less important.
If your tool tries to fiddle with the content of your statement, it's not a serious tool. No one would accept a spell checker that has an opinion on the type of content it is correcting.
Perhaps the HR support line for OpenAI developers tasked with implementing the censorship system?