We have data standards and agreements with those companies; because we pay them, we can hold them to expectations. Even then, we're strict about what touches vendor servers, and everything that does is audited and monitored. Accounts are managed by us and tied into onboarding and offboarding. If they have a security incident, they notify us, and there's a response and remediation process.
ChatGPT seems to be used more like a fast Stack Overflow, except people aren't thinking of it as a forum where others will see their question, so they aren't as cautious. We're just waiting for some company's data to show up remixed into an answer for someone else and then plastered all over the internet for the infosec lulz of the week.
> We have data standards and agreements with those companies; because we pay them, we can hold them to expectations. Even then, we're strict about what touches vendor servers, and everything that does is audited and monitored. Accounts are managed by us and tied into onboarding and offboarding.
For every company like yours, there are hundreds that don't have those controls. People use free Gmail addresses for sensitive company stuff, paste random things into random pastebins, put their private keys in public repos, etc.
Yes, data leaks from OpenAI are bound to happen (again), and they should beef up their security practices.
But assuming ChatGPT is the only tool people use insecurely vastly overestimates their security practices elsewhere.
The solution is education, not avoiding new tools.
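On the education point: even a naive guardrail catches a lot of that low-hanging fruit. Below is a minimal sketch of a pre-commit secret scan in Python; the patterns and file handling are my own illustrative assumptions, and a real setup would use a maintained scanner like gitleaks or detect-secrets instead.

    #!/usr/bin/env python3
    # Naive pre-commit secret scan -- illustrative only.
    import re
    import subprocess
    import sys

    # A couple of obvious secret shapes; deliberately not exhaustive.
    PATTERNS = [
        re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
        re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
        re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}"),
    ]

    def staged_files():
        # Files added/copied/modified in the index, one path per line.
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True, check=True,
        )
        return [line for line in out.stdout.splitlines() if line]

    def main() -> int:
        hits = []
        for path in staged_files():
            try:
                with open(path, errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue  # unreadable paths aren't our problem here
            for pat in PATTERNS:
                if pat.search(text):
                    hits.append((path, pat.pattern))
        for path, pattern in hits:
            print(f"possible secret in {path}: matches {pattern}", file=sys.stderr)
        return 1 if hits else 0  # nonzero exit makes git abort the commit

    if __name__ == "__main__":
        sys.exit(main())

Drop it in as .git/hooks/pre-commit (made executable) and the nonzero exit blocks the commit. The point isn't the regexes; it's that a thirty-line tripwire plus a little training beats a blanket ban on new tools.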