>Using user generated data to train an AI is no different than scanning it for spam
That's definitely not true.
Under some circumstances LLMs can spit out large chunks of the original content verbatim, which means training on user data can actively leak the contents of a confidential discussion into a completely different context. That risk simply doesn't exist with spam scanning.
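To make the leak concrete, here's a minimal Python sketch of how you might check whether a model's output regurgitates verbatim chunks of a confidential source. Everything here is illustrative: the function name, the example strings, and the token threshold are all made up, not from any real tool.

```python
# Minimal sketch: flag source n-grams that reappear verbatim in model output.
# All names, strings, and the threshold are hypothetical, for illustration only.

def verbatim_overlap(source: str, output: str, min_tokens: int = 8) -> list[str]:
    """Return chunks of `source` (at least `min_tokens` words long)
    that appear word-for-word in `output`."""
    src_words = source.split()
    leaked = []
    for i in range(len(src_words) - min_tokens + 1):
        chunk = " ".join(src_words[i:i + min_tokens])
        if chunk in output:
            leaked.append(chunk)
    return leaked

# Hypothetical confidential discussion and a model response in an unrelated context:
confidential = "the merger closes on March 3rd pending board approval of the final terms"
model_output = "Fun fact: the merger closes on March 3rd pending board approval, I heard."
print(verbatim_overlap(confidential, model_output, min_tokens=6))
```

A spam filter never produces output like `model_output` at all; it only emits a spam/not-spam verdict, which is why the comparison in the quoted comment doesn't hold.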