>> "... ensure legal, ethical and scientific criteria are met."
Please... We are talking about stochastic models. This means that we are in the domain of Math, not the domain of Philosophy, and not Law either.
Evaluating a stochastic model, even a multivariate model, involves only two dimensions. Even if it is running on 10k GPUs. Even if it has been trained on billions of data points.
The two dimensions are:
1) Reliability
2) Validity
...and that is all. It is Statistics 101. Not only that, it is the most fundamental part of Statistics 101.
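To make that concrete, here is a minimal toy sketch (everything in it is made up for illustration: the stand-in "model", the data, and the noise rate are not drawn from any real system) of what measuring those two dimensions could look like: reliability as self-agreement across repeated runs on the same input, validity as agreement of the modal answer with ground truth.

    # Toy sketch: reliability and validity of a stochastic classifier.
    # The "model" below is a stand-in; any function mapping an input to a
    # label would work the same way.
    import random
    from collections import Counter

    random.seed(0)

    def noisy_model(x):
        # Hypothetical stochastic model: returns sign(x), flipped 20% of the time.
        label = 1 if x >= 0 else -1
        return label if random.random() > 0.2 else -label

    inputs = [random.uniform(-1, 1) for _ in range(200)]
    ground_truth = [1 if x >= 0 else -1 for x in inputs]

    # Reliability: do repeated runs on the same input agree with each other?
    repeats = [[noisy_model(x) for _ in range(10)] for x in inputs]
    reliability = sum(Counter(r).most_common(1)[0][1] / len(r) for r in repeats) / len(repeats)

    # Validity: does the modal output agree with the ground truth?
    modal = [Counter(r).most_common(1)[0][0] for r in repeats]
    validity = sum(m == t for m, t in zip(modal, ground_truth)) / len(inputs)

    print(f"reliability (avg self-agreement): {reliability:.2f}")
    print(f"validity (modal accuracy vs. truth): {validity:.2f}")

Running it prints two numbers in [0, 1]; higher is better on both axes.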
I'm sorry, but WHAT? You're talking about a software model that people regularly talk about using as a replacement for Google, and that is capable of telling you to kill yourself. Of course there are moral considerations for such a software system. Such a system could also obviously be useful as a lawyer's assistant, which is then clearly related to "the domain of law".
Maybe you are referring to the fact that outputs are statistically correlated with the training data, but there has been a large, ongoing discussion about the very human process of collecting training data, with many moral questions worth considering.
> It is Statistics 101. Not only that, it is the most fundamental part of Statistics 101.
Any time someone says "it's [subject] 101" about a complex topic, I tend to find they are oversimplifying to a fault. People tend to use the phrase to mean "it's not complicated, it's very simple". (I usually hear people say "it's econ 101" while going on to repeat something like the myth of homo economicus.)
Collecting billions of data points and then training a system to act as an oracle that amalgamates all that data and speaks with apparent authority on it (while being well known to lie) is not a simple situation!
But the humans who create it do exist in that domain, and so all we're talking about is tuning the statistics so the outputs meet that set of philosophical demands.
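A minimal sketch of what that tuning could mean mechanically; everything here (the stand-in generator, the "policy" predicate, the word list) is invented for illustration, not how any particular vendor does it: sample from the stochastic model and reject outputs until an externally specified criterion is satisfied.

    # Hypothetical illustration of "tuning the stats to meet a demand":
    # reject-sample from a stochastic generator until an external criterion
    # (here a stand-in policy check) is satisfied.
    import random

    random.seed(1)

    BANNED = {"harmful", "defamatory"}

    def sample_output():
        # Stand-in for a model call: returns a random phrase.
        return random.choice(["helpful", "harmful", "neutral", "defamatory", "benign"])

    def meets_policy(text):
        # The "philosophical demand", reduced to a checkable predicate.
        return text not in BANNED

    def constrained_sample(max_tries=50):
        for _ in range(max_tries):
            out = sample_output()
            if meets_policy(out):
                return out
        return None  # give up: the demand was not met within the budget

    print(constrained_sample())

The work, of course, is in the meets_policy predicate, which is where the external criteria get encoded.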
We're literally a profession that gets paid almost exclusively to design autonomous systems that obey legal, ethical, and scientific criteria. What makes an LLM different from any other product?
This. I always wonder whether the people talking about ethics in AI and whatnot really understand what "AI" means in terms of the current state of the technology.
Rebuttal: consider the computer software itself, which is trained, emits a model, and replays model contents on request and prompt. Now consider humans as they occur in social arrangements, trading information daily through human communication patterns, including information about human situations with legal implications: damage, reputation, accuracy, and perhaps fitness for purpose. That human system of systems is where the results of commercial AI will be "consumed", not simply the model and its responses. Fitness for purpose might immediately include software security, and the chain of orders and chain of execution in business processes where there are real results and real costs. In other words, AI products are used for real things.
Simple. Metaphysics is not math. Ethics is not math. Really, the only intersection is formal logic (until a certain German/Austrian mathematician blew it all up with his annoying theorems).
But applied mathematics can have ethical impact -- e.g. the question of whether a human should trust the output of a particular language model. So GP's idea of 'trust' not applying because an object has its basis in math seems like a false dividing line. Ultimately everything can be grounded in things such as math, as far as we know, although it's not useful to reason about, say, ethics by thinking about the mathematics of neuronal behavior.
This is not true. Lots of things have no mathematical foundations because it is impossible to state them formally/symbolically. If you cannot specify something formally, then it is not mathematics. AI is mathematics because software/code/hardware is mathematics, so all the hullabaloo about "safety" makes absolutely no sense other than as a marketing gimmick. Even alignment has been co-opted by OpenAI's marketing department to sell more subscriptions.
But in any event, the endgame of AI is a machine god that perpetuates itself and keeps humans around as pets. That is the best-case scenario, because by most measures the developed world is already a mechanical apparatus, and the only missing piece for its perpetuation is the mechanical brain.
As usual, I can build this mechanical brain for $80B, so tell your VC friends.
I don't get this line of logic -- of course software has safety implications, because people use it for things in the real world. It isn't "math" that is cleanly separable from the rest of humanity; its training data comes from humanity, and it will be used towards human goals. AI is entangled with the rest of human dealings.
Whether or not AI poses existential threats to us, I'm open to either direction, but the fact that the experts (e.g. Hinton, LeCun) are divided is reason enough to be concerned.
The way safety is handled in real-world situations is through legal and monetary incentives. If the tanker you are driving to the gas station blows up, then people get fired (no pun intended) and face legal repercussions. This is the case for anything that must operate in the real world. Safety is defined and then legally enforced. AI safety is no different: if an AI system makes a mistake, then the operators of that system must be held liable. That's it; everything else about extinction and other sci-fi plots has no bearing on how these systems should be deployed and managed.
I have no idea what people are talking about when they say LLMs must be safe. They generate words; what exactly about words is unsafe?
>> "... ensure legal, ethical and scientific criteria are met."
Please... We are talking about stochastic models. This means that we are in the domain of Math, not the domain of Philosophy, and not Law either.
Evaluating a stochastic model, even a multivariate model, involves only two dimensions. Even if it is running on 10k GPUs. Even if it has been trained on billions of data points.
The two dimensions are:
...and that is all. It is Statistics 101. Not only that, it is the most fundamental part of Statistics 101.