> regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.
While I generally share the view that _research_ should be unencumbered but deployment should be regulated, I do take issue with your view that safety only matters once AIs are ready for "widespread use". A tool made available in a limited beta can still be harmful, misleading, or too easily support irresponsible or malicious purposes, and in some cases the harms could be _enabled_ by the fact that the release is limited.
For example, suppose that next month you develop a model that can produce extremely high-quality video clips from text and reference images, you do a small, gated beta release with no PR, and one of your beta testers immediately uses it to make, e.g., highly realistic revenge porn. Because almost no one is aware of the stunning new quality of your model's outputs, most people don't believe the victim when they assert that the footage is fake.
I would suggest that the first non-private (i.e., non-employee) release of a tool should make it subject to regulation. If I open a restaurant, I'm expected to be in compliance with basic health and safety regulations on my first night, no matter how few customers I have. If I design and sell a widget that does X, even for the first one I sell, my understanding is that there's a concept of an implied requirement that my widgets must actually be "fit for purpose" for X; I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?
> For example, suppose that next month you develop a model that can produce extremely high-quality video clips from text and reference images, you do a small, gated beta release with no PR, and one of your beta testers immediately uses it to make, e.g., highly realistic revenge porn.
You make a great point here. This is why we need as much open source and as wide adoption as possible. Wide adoption = public education in the most effective way.
The reason we are having this discussion at all is precisely because OpenAI, Stability.ai, FAIR/Llama, and Midjourney have had their products widely adopted and their capabilities have shocked and educated the whole world, technologists and laymen alike.
The benefit of adoption is education. The world is already adapting.
Doing anything that limits adoption or encourages the underground development of AI tech is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for.
I think the stance that regulation slows innovation and adoption, and that unregulated adoption yields public understanding, is exceedingly naive, especially for technically sophisticated products.
Imagine if, e.g., drug testing and manufacturing were subject to no regulation. As a consumer, you may be aware that some chemicals are very powerful and useful, but you can't be sure that any specific product contains the chemicals it says it contains, that it was produced in a way that ensures a consistent product, that it was tested for safety, or what the evidence is that it's effective against a particular condition. Even if drugs from a range of producers are widely adopted, does the public really understand what they're taking, and whether it's safe? Should the burden be on them to vet every medication on the market? Or is it appropriate to have some regulation to ensure medications contain their active ingredients in the amounts stated, are produced with high quality assurance, and are actually shown to be effective? Oh no, says a pharma industry PR person: "Doing anything that limits the adoption or encourages the underground development of bioactive chemicals is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for."
If a team of PhDs can spend weeks trying to explain "why did the model do Y in response to X?" or to figure out "can we stop it from doing Z?", then expecting "wide adoption" to produce enough "public education" to defuse all harms, such that no regulation whatsoever is necessary, is ... beyond optimistic.
Regulation does slow innovation, but it is often needed because those innovating will not account for externalities. This is why we have the Clean Air Act and the Clean Water Act.
The debate is really about how much and what type of regulation. It is of strategic importance that we do not let bad actors get the upper hand, but we also know that bad actors will rarely follow any of this regulation anyway. There is something to be said for regulating the application rather than the technology, as well as for realizing that large corporations have historically used regulatory capture to increase their moat.
Given it seems quite unlikely we will be able to stop prompt injections, what are we to do?
Provenance seems like a good option, though difficult to implement. It would allow us to track who created what, so when someone does something bad, we can find and punish them.
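To make "provenance" concrete, here is a minimal sketch of one way it could work, assuming a generation service signs a digest of each output with a private key whose public half is published. Everything here (the function names, the use of Ed25519 via Python's `cryptography` package) is an illustrative assumption, not an existing standard:

```python
# Minimal provenance sketch: a generation service signs a digest of each
# output at creation time, so anyone holding the published public key can
# later verify which service produced a given artifact.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The service keeps the private key secret and publishes the public key.
service_key = ed25519.Ed25519PrivateKey.generate()
public_key = service_key.public_key()

def sign_output(content: bytes) -> bytes:
    """Sign a SHA-256 digest of the generated content."""
    digest = hashlib.sha256(content).digest()
    return service_key.sign(digest)

def verify_provenance(content: bytes, signature: bytes) -> bool:
    """Check whether this service vouches for having produced the content."""
    digest = hashlib.sha256(content).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

clip = b"...generated video bytes..."
sig = sign_output(clip)
assert verify_provenance(clip, sig)
```

Even this toy version hints at why it's difficult: the signature only helps if it travels with the content, it identifies the service rather than the individual user unless the service also keeps records, and nothing stops someone from simply stripping it before sharing.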
There are analogies to be made with the Bill of Rights and gun laws. The gun analogy seems interesting because guns have to be registered, but criminals often won't register theirs, and the debate is quite polarized.
With the pharma example, what if we as a society circumvented the issue by not having closed-source medicine? If the means to produce aspirin, including ingredients, methodology, QA, etc., were publicly available, what would that look like?
I met some biohackers at defcon who took this perspective, a sort of "open source, but for medicine" ideology. I see the dangers of a massively uneducated population trying to 3D-print aspirin and poisoning themselves, but they already do that with horse paste, so I'm not sure it's a new issue.
My argument isn't that regulation in general is bad. I'm an advocate of greater regulation in medicine, drugs in particular. But the cost of public exposure to potentially dangerous unregulated drugs is quite different from the cost of trying to regulate or create a restrictive system around the development and deployment of AI.
AI is a very different problem space. With AI, even the big models fit easily on a micro SD card; you could carry all of GPT-4 and its supporting code on a thumb drive, or transfer it wirelessly in under 5 minutes. From a practicality standpoint, when you really think about enforcing developmental regulation, it's quite different from drugs or conventional weapons or most other things.
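For what it's worth, the "under 5 minutes" claim is at least arithmetically plausible. A back-of-the-envelope check, where both the model footprint and the link speed are assumptions for illustration (no actual figures for GPT-4 are public):

```python
# Back-of-the-envelope: time to transfer a large model's weights wirelessly.
# Both inputs are illustrative assumptions, not published specifications.
model_size_gb = 300       # assumed footprint of weights plus supporting code
link_speed_gbps = 9.6     # assumed Wi-Fi 6 peak link rate, ideal conditions

seconds = (model_size_gb * 8) / link_speed_gbps   # GB -> gigabits, then divide
print(f"~{seconds / 60:.1f} minutes")             # ~4.2 minutes at these numbers
```

Real-world link rates would stretch that out, but the underlying point stands: model weights are trivially portable in a way physical contraband is not.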
Also consider that criminals and other bad actors don't care about laws. The RIAA and MPAA have tried hard for 20+ years to stop piracy, and the DMCA and other laws have been built to support that, yet anyone reading this can easily download the latest blockbuster movie, even one still in theaters.
Even still, I'm not saying don't make laws or regulations on AI. I'm just saying we need to carefully consider what we're really trying to protect or prevent.
Also, I certainly believe that in this case, the widespread public adoption of AI tech has already driven education and adaptation that could not have been achieved otherwise. My mom understands that those pictures of Trump being chased by the cops are fake. Why? Because Stable Diffusion is on my home computer so I can make them too. I think this needs to continue.
> I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?
I can sell a webserver that gets used to host illegal content all day long. Should that be included? Where does the regulation end? I hate that the answer to every question seems to be to just add more government.
Just because there's a conversation about adding more government doesn't mean people are seeking a totalitarian police state. Seems quite the opposite for many of these commenters supporting regulation in fact.
Similarly, it's not really good faith to assume everyone opposed to regulation in this field is seeking a lawless libertarian (or perhaps anarchist) utopia.
> I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?
There are already laws against false advertising, misrepresentation etc. We don’t need extra laws specifically for AI that doesn’t perform well.
What most people are concerned about is AI that performs too well.
I would assert that just as I have the right to pull out a sheet of paper and write the most vile, libelous thing on it I can imagine, I have the right to use AI to put anyone's face on any body, naked or not. The crime comes from using it for fraud. Take gasoline for another example. Gasoline is powerful stuff. You can use it to immolate yourself or burn down your neighbor's house. You can make Molotov cocktails and throw them at nuns. But we don't ban it, or saturate it with fire retardants, because it has a ton of other utility, and we can just make those outlying uses illegal. Besides, five years from now, nobody's going to believe a damned thing they watch, listen to, or read.
I have the right to use my camera to film adult content. I do not have the right to open a theater which shows porn to any minor who pays for a ticket. It's perfectly legal for me to buy a gallon of gasoline and a bunch of finely powdered lead and put them into the same container, creating gasoline with lead content. It is _not_ fine for me to run a filling station which sells leaded gasoline to motorists. You want to drink unpasteurized milk fresh from your cow? Cool. You want to sell unpasteurized milk to the public? Shakier ground.
I think you should continue to have the right to use whatever program to generate whatever video clip you like on your computer. That is a distinct matter from whether a commercially available video generative AI service has some obligations to guard against abusive uses. Personal freedoms are not the same as corporate freedom from regulatory burdens, no matter how hard some people will work to conflate them.
I think by "widespread use" he means the reach of the AI System. Dangerous analogy but just to get the idea across: In the same way there is higher tax rates to higher incomes, you should increase regulations in relation to how many people could be potentially affected by the AI system. E.G a Startup with 10 daily users should not be in the same regulation bracket as google. If google deploys an AI it will reach Billions of people compared to 10. This would require a certain level of transparency from companies to get something like an "AI License type" which is pretty reasonable given the dangers of AI (the pragmatic ones not the DOOMsday ones)
But the "reach" is _not_ just a function of how many users the company has, it's also what they do with it. If you have only one user who generates convincing misinformation that they share on social media, the reach may be large even if your user-base is tiny. Or your new voice-cloning model is used by a single user to make a large volume of fake hostage proof-of-life recordings. The problem, and the reason for guardrails (whether regulatory or otherwise), is that you don't know what your users will do with your new tech, even if there's only a small number of them.
I think this gets at what I meant by "widespread use" - if the results of the AI are being put out into the world (outside of, say, a white paper), that's something that should be subject to scrutiny, even if only one person is using the AI to generate those results.
I agree with you. I think that's an excellent and specific proposal for how AI could be regulated, and you should share it with your senators/representatives.
> For example, suppose that next month you develop a model that can produce extremely high-quality video clips from text and reference images, you do a small, gated beta release with no PR, and one of your beta testers immediately uses it to make, e.g., highly realistic revenge porn.
As I understand it, revenge porn is seen as being problematic because it can lead to ostracization in certain social groups. Would it not be better to regulate such discrimination? The concept of discrimination is already recognized in law. This would equally solve for revenge porn created with a camera. The use of AI is ultimately immaterial here. It is the human behaviour as a product of witnessing material that is the concern.
I like jobs too but what about the risks of AI? Some people I respect a lot are arguing - convincingly in my opinion - that this tech might just end human civilization. Should we roll the die on this?