I agree. The main function of a component should describe what it does, similar to describing the algorithm in natural language. I have been following this pattern with success. It is much easier to understand things at a glance.
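For example, a minimal sketch of what I mean (the helper names here are just illustrative, not from any real codebase):

```python
def remove_invalid_entries(records):
    # Keep only records that have a category and a numeric amount.
    return [r for r in records
            if r.get("category") and isinstance(r.get("amount"), (int, float))]

def summarize_by_category(records):
    # Sum the amounts per category.
    totals = {}
    for r in records:
        totals[r["category"]] = totals.get(r["category"], 0) + r["amount"]
    return totals

def format_as_table(totals):
    # Render one "category: amount" line per entry.
    return "\n".join(f"{category}: {amount}" for category, amount in sorted(totals.items()))

def generate_report(raw_records):
    # Reads like the algorithm described in natural language:
    # clean the data, summarize it, format the summary.
    records = remove_invalid_entries(raw_records)
    totals = summarize_by_category(records)
    return format_as_table(totals)
```

Reading generate_report alone tells you the whole story; the details only matter when you drill into a step.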
As if one's market value were always consistent with their skills. There are so many variables that play a role in how much one earns: where you live, negotiation skills, marketing skills, getting competent people to evaluate you properly, micro and macro economics, willingness to deal with bullshit corporate stuff, etc.
Why are most people blind to the most important point about these models: the progress is moving at an astonishing pace?
IMO it is a useless discussion to debate the current SOTA or economics. In two years we will certainly have some breakthrough, as we have seen in past years. And things are speeding up.
Sorry, I didn't want to sound too sarcastic and negative: I don't disagree with you. But looking at a broader timescale it is clear that, in general, our technological progress is speeding up on an exponential curve.
I have been following the NLP field closely for over a decade and the progress is speeding up. Can it stall? Sure. But everything points to it not stalling.
I kinda disagree. We've made some leaps and bounds, but our actual "progress" has mostly been defined by orgs like OpenAI throwing money at the problem. It's only technological progress in the sense that grinding stone bricks is "technological" or "progress".
The difference between ChatGPT and Talk to Transformer is frankly not that large, at least when treated as a black-box. 90% of the people freaking out over ChatGPT on Twitter would have also freaked out over the original GPT, had they known it existed. The extra nuance that we're adding on feels like stalling, and while some of the optimizations have been cool (gpt-neo-2.7b, Stable Diffusion) it feels like we're hitting the top of our progress curve.
There are extremely interesting technical and economical critiques in the article. They may be right or wrong but at least they’re intellectually rigorous - much more than the average GPT commentary.
Your comment, in a nutshell and if I'm reading it right, is that it's not worth engaging with any of these ideas because progress - whatever that means, however that's measured - is fast.
Unfortunately, whether something was worth engaging with can only really be determined in hindsight.
If it were 1890 we'd be talking about how our cities will soon be buried in animal dung, which will lead to the collapse of mankind. The people debating that could not reasonably have foreseen that in 100 years CO2 would be the greater risk, and 100 years from now the greater risk will be something most of us have not imagined.
Are the issues you point out worth talking about? Of course, but are they worth the amount of time and effort we will spend debating them? Get back to me in 5 years and we'll see.
GPT-3 came 3 years after transformers were invented. Now we are close to 3 years after GPT-3; did anything nearly as big happen during that time? Things aren't speeding up at all.
> we will certainly have some breakthrough, as we have seen in past years. And things are speeding up.
In the past years we have seen a tremendous scaling of a) workforce, b) hardware, c) data. For me a breakthrough would be in data/energy efficiency. What areas are promising there?
Being honest, I am not that concerned with what I can do as an individual. If sh*t hits the fan we're going to have to solve this at a societal level, not an individual one.
As others have already pointed out: if society collapses heavily due to the displacement of labour, money won't matter much.
Unless advanced AIs are used to build weapons for those who own the capital, nothing can hold off angry mobs everywhere.
When we get to the point where human labour in general can be easily automated, that's when capitalism as we know it must be buried. We must find another way to live and prosper on this planet.
Innovation is really a weird thing. If we could show ChatGPT to people 10 or 20 years ago, I'm certain most people would be amazed by this technology.
Today, lots of people heavily criticize its limitations and think 40 bucks/mo is too much for this kind of tech.
Reminds me of newspaper headlines being skeptical when the light bulb first went on sale. 99% of people seem to be fundamentally conservative and can't grasp innovation properly, nor understand the fast pace of evolution some things can have.
Not saying this is good or bad, just an interesting phenomenon to observe.
I think this is mostly people being (rightly) pissed off that this technology and the kinds of resources required to replicate it are in the hands of 1-3 companies, who try to add “safety” features that hamper it.
DRMed technology is almost worse than not having it - it’s tantalisingly close, but you can’t use it for what you want because a human (not the machine!) said no.
I find it amusing that so many people are skeptical about general AI, trying to come up with arguments that it will never happen, while at the same time we do not understand how our own intelligence works. Go figure.
Mog: “What truly is fire? The divine blessing stolen by Prometheus? Concentrated Phlogiston? The element of change? Is it not madness to seek to create something we don’t even have a good definition of?”
Grog: “Grog rubs two sticks together” Lowers voice and looks around furtively “really hard.”
Fire is warm and it produces light, and it's easily observable in nature; it's a "thing" which humans could easily describe and recognize before reproducing it. Someone rubbed two sticks together and noticed things becoming warmer and warmer as they went, until it started to feel like the warmth of fire, and then it happened.
In the case of AGI, I don't feel like we have a definition of done, so it seems kind of crazy to be chasing it / throwing money at it.
I didn't say I'm against it, but it does seem like a crazy way to go about it. Keep producing models that one day might mimic intelligence as we know it?
Grog could define fire in a very real way even back then, which is how he knew he'd so easily created it. It is hard for us to even know intelligence (the one we mean in AGI) when we see it, much less create it, no?
We can define intelligence in a very real, practical way now. We see and identify intelligence all the time in humans and in animals and in AI. We may not be perfect at identifying it (just like grog might mistake a rising sun for a forest fire), but we don't need a perfect mathematical or philosophical definition that we all agree on to create it. We just need to rub sticks together really hard.
The person you originally replied to and I disagree that we have any real, practical definition. I can recognize what humans and to an extent animals do as intelligent, but haven’t seen a definition that separates that intelligent behavior from them. I have never seen anything that’s been called ai do something I could call intelligent in that animal-like sense (though some have been impressive in the same way Google / page rank was impressive when it first came out)
So, I don't see why rubbing these statistical-model sticks should suddenly burst into intelligence, but I'm open to seeing convincing reasoning on that at some point. I wouldn't invest time or energy in the meantime and, like that original poster, think it's kinda insane to do so if my goal were to see human-like intelligence emerge outside of humans.
It is interesting that you can write thirteen posts on the topic without being able to define it.
It also seems very odd that you can differentiate between some things that you think are intelligent, and some things that you think definitely are not, yet you are incapable of extracting any sort of goal from that knowledge.
If you could tell us your criteria, perhaps we could help you with that...
I’m simply very curious about the subject, it’s super important :)! Given that, I’m also frustrated with what seems like a popular lack of critical thought and curiosity on the specifics.
In these comments, when I’ve talked about an intelligence I can distinguish, I’ve been talking about human / animal intelligence. AGI implies an intelligence independent of that, so I’m asking about the specifics there - what are we calling intelligence if not “what humans do”?
If we are calling it just that, then I’d argue everything I know about how these models do things is very different from what I know of how humans approach the specific tasks the models are built against. And I’ve read that that’s intentional. So, even with that sort of definition I don’t see how it follows that these approaches are on any linear path to AGI (maybe nonlinear if we learn limits and such from mistakes).
I've since read more of the article (it's long, huh?). I like the framework they use from Roitblat in Section 2 - and again, don't see how LLMs and such are on the road to fulfilling those criteria.
Fair enough, though I feel you are a bit too eager to push back against ideas that go counter to your initial thoughts. Of course, because I hold differing opinions, you could reasonably object that it is just what I would say!
I have a different idea of what AGI means: in my view, it is a retronym created in the 1980s in order to refer to AI of the sort Turing envisioned (which was more or less "what humans do") and differentiate it from things that were then being called AI, such as IBM's Deep Blue, which were mostly brute force applied to conceptually narrow problems.
You mentioned Roitblat's framework, and I would draw your attention to one aspect of it: it is not just a list of things that humans do, but those things which humans do considerably better than other animals, yet for all of them, there are other species that do them to some extent. As an evolutionist, I suppose there was a relatively recent time in the past when some of our ancestors or sibling species (all now extinct) had some or all of these skills to some intermediate level. In this view, intelligence is not an all-or-nothing concept, and achieving some of it is still progress.
Here's a view which you may not have seen: the pace of progress in AGI has not been constrained by an inability to define what we want, but by the pace at which we see ways to build the things we can see we need. For example, it is clear that current LLMs have a problem with truth, but it is not clear from what has been made public so far that anyone has a solution. Some people think that what's being done now with LLMs, but more of it, will be enough to get us to what will be generally accepted as AGI; I am skeptical, but I am willing to be persuaded otherwise if the evidence warrants it.
Not really. Build it, test it, notice it fails to meet what we expect of intelligence under conditions X, tweak it to fix that failure and repeat until we can't find any further failures. Then we'll have a formal model of intelligence that counts as a definition.
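Roughly, the loop looks something like this sketch (build_model, evaluate, and tweak are hypothetical placeholders just to make the process concrete, not any real API):

```python
def refine_until_no_known_failures(build_model, evaluate, tweak, test_conditions):
    # Build it, test it against every condition we can think of,
    # tweak whatever fails, and repeat.
    model = build_model()
    while True:
        failures = [c for c in test_conditions if not evaluate(model, c)]
        if not failures:
            # No remaining known failures: the working model itself becomes
            # the operational definition we were missing at the start.
            return model
        model = tweak(model, failures)
```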
It's not hard to tell that it has not produced human-level intelligence, so the process outlined by naasking has not yet run into an insurmountable problem.
We use tons and tons of pharmaceuticals whose mechanism of action is poorly understood at best or in a few cases not at all.
We are even able to predict whether other compounds might work without knowing why just based on structural similarity.
It's ideal to know the full mechanism, and it obviously aids engineering, but there's no reason you have to wait for that to use something. People used fire for millennia before oxidation-reduction reactions were understood.
All these types of comments are talking about using the results of physical phenomena without fully understanding them. This is not the same thing as building human-like intelligence. We are using a Turing machine and we are bound by the limits of Turing machines. Human intelligence is nothing like that, or at least no extraordinary evidence to the contrary has been presented. Now, if you create a new computational model that is not equivalent to a Turing machine then maybe you will be on the road to something like it, but that's not really where things are at.
In this case, it's not so much the mechanisms of pharma we wouldn't know, but what it is we're even trying to cure - not having a definition of the disease or its symptoms and yet trying to engineer a cure would be pretty insane.
In response to your questions, ChatGPT says, "Intelligence is the ability to process information, think abstractly, and learn from experiences. In the context of artificial intelligence (AI), intelligence typically refers to the ability of a machine or system to perform tasks that typically require human-like thinking, such as learning, problem solving, and decision making. There is no one specific definition of intelligence that is universally accepted, and different researchers and practitioners may have slightly different interpretations of what intelligence means in the context of AI."
Seems a bit circular.
"What's a good definition of intelligence?"
"Human intelligence."
"What's human intelligence?"
"What we want AI intelligence to be like."
Just imagine if someone actually succeeds and an AGI just starts happily chatting about everything, properly solving general problems like a 30-something engineer, mathematician or whatever.
No one on the planet would know how to be sure it's not just emulating human behavior.
Sorry, but your argument doesn't make sense. Engineering/building is different from understanding how things work. Surely they are related: if you build something you usually learn how it works, and vice-versa.
Insane to me is to affirm that something has a limit when you don’t fully understand it.
What are other examples of things we engineer before knowing what they are? I’m honestly having trouble thinking of any.
And I think it’s a shame we haven’t considered what the limits are on “intelligence” very much. Knowing them has been immeasurably valuable in software engineering and algorithms, for one.
I agree with you. I also think it is useful to understand limits. That is not what I'm arguing against; it is affirming limits without the knowledge to do so. Until we really figure out how our intelligence works, we can't say we can't reproduce it algorithmically.
We can find plenty of examples where engineering advanced beyond our understanding: bridges, boats, tables, etc. When humanity started building those we didn't have the full picture of our physics, yet we built and used them for thousands of years.
Again, I’m not saying we can’t engineer something from knowledge, I’m saying we can’t affirm limits onto something we don’t fully understand yet.
IMHO this paper is speculative and biased towards our human intelligence. From all the advances I have seen so far in the AI landscape, I'm growing more and more skeptical that our intelligence is something so complex that we can't replicate it.
That analogy doesn't work. We knew we were trying to build a light bulb. There were properties of electricity, a complex physical phenomenon, that we did not understand. However, we have a rigorous understanding of Turing machines. We have only a nascent understanding of human intelligence.
The electron was discovered in 1897, but the first light bulb came in 1802.
There are more examples as well; as another user has commented, we didn't know what fire was until relatively recently, but we have been using it for thousands of years.
But either way, is knowing the electron knowing electricity? There are so many properties of it that can be known and manipulated without that insight- and indeed they built up that understanding to reach practical engineering and use of electricity. That’s what I think is being gotten at wrt intelligence.
“Knowing” something isn’t necessarily about being aware of its smaller parts.
The source you linked already cites it as the first arc lamp.
> But either way, is knowing the electron knowing electricity? There are so many properties of it that can be known and manipulated without that insight- and indeed they built up that understanding to reach practical engineering and use of electricity. That’s what I think is being gotten at wrt intelligence.
Yes. That is exactly my point. We don't need to entirely understand what intelligence is in order to be able to create it. The same way we didn't know what fire is, but we created it with no problem.
But we can hardly define intelligence, let alone "entirely understand" it. A child could give a good, practical definition of fire and manipulate it skillfully thousands of years ago. Not so much us grown adults wrt intelligence today.
That’s a good start, but doesn’t help us since even the most basic programs can fit that definition and are not what we mean when we say AGI. I have yet to see anything approaching a useful definition of intelligence across several discussions of impressive new language, and other statistical models - it feels like we should have that before talking about making it real!
With fire, even “when things turn from themselves to ashes and produce heat” (which I imagine a prehistoric child could come up with) distinguishes fire usefully from most other phenomena in the world
You are equivocating throughout this thread: you accept that we developed an understanding of fire incrementally, and concurrently developed fire-based technologies, yet you insist it is different for intelligence, without giving any good reason to think so.
The basic problem is that it seems pretty clear that human intelligence is not anything like a Turing machine and no one has presented any computational system not equivalent to a Turing machine. Some might conclude this is a foundational problem.
It does not follow, merely from noting that there are differences between human intelligence and a Turing machine, that no "merely" Turing-equivalent device could display human-level intelligence. Every attempt I have seen so far to carry through that argument either ends up begging the question or becoming an argument from incredulity.
Note that this is not an argument that it is possible, which certainly has not been conclusively established.
I think the burden is on those making the claim that a system equivalent to a Turing machine can have human-level or superior intelligence. Humans are vastly more intelligent than any other organism we have encountered. I may be incredulous at the idea, but the scale of the difference between human intelligence and programs based on Turing machines is enormous.
The whole point of Turing machines is that they can emulate any computable system. You could have a Turing machine simulate every atom inside a human, and then you would trivially have a Turing machine that shows intelligence equivalent to a human.
> I do think high frequency rate monitors are a gimmick
Do you mean refresh rate? If yes, I used to think that but couldn’t be more wrong. Nowadays I have a 144hz monitor and I’m considering even going >240hz.
If you have the gear to push that many fps, it's a night-and-day difference.
I agree with you. I don’t understand why some people get annoyed about this. Any desired life change starts with goal setting.
Easy to say "just do what you're supposed to do" as if that phrase by itself had magic powers. It is well known that motivation and habit formation are fairly complex processes, so I don't get why we still get such reductionist comments around here.
We need more empathy and less criticism toward others, especially when they're trying to improve themselves.