While I understand some of the concern for LLMs like ChatGPT, I have a very different point of view from people like the author of this article.
From an engineering, ‘getting stuff done’ point of view, I have found that transformer models, starting with BERT, solve very difficult NLP problems, with only the hardest cases (the most difficult anaphora resolution problems, for example) still tripping them up.
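To give a concrete flavor of what I mean, here is a minimal sketch, assuming the Hugging Face transformers library and the publicly available deepset/roberta-base-squad2 extractive QA model; the passage and question are just illustrative, not anything from my own projects:

```python
# Minimal sketch: extractive question answering with a transformer model.
# Assumes `transformers` is installed and the deepset/roberta-base-squad2
# model can be downloaded; the text and question are illustrative only.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "Mary handed the report to her manager before the meeting. "
    "She had spent the whole weekend revising it."
)

# Answering this requires resolving "She" to Mary and "it" to the report --
# the kind of light anaphora these models usually, but not always, get right.
result = qa(question="Who revised the report?", context=context)
print(result["answer"], result["score"])
```

A few lines like that cover a surprising share of everyday NLP work; it is only the genuinely hard anaphora and discourse cases where I still see these models stumble.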
I have only had access to the ChatGPT-powered Bing chat for about ten days, but so far the search and chat results have been very useful. I think I have had to give only one ‘thumbs down’ rating, and even then the response included some useful web links.
I think we are going to see a wide range of ‘products for creators’ over the next year built on the OpenAI APIs, the Hugging Face APIs, and models you can run yourself.
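The building block for most of those products is not much more than a single API call. A minimal sketch, using the official openai Python SDK (v1.x style); the model name and prompts are placeholders, and exact syntax varies by SDK version:

```python
# Minimal sketch of a 'product for creators' building block: one call to the
# OpenAI chat completions API. Assumes the `openai` package (v1.x) is installed
# and OPENAI_API_KEY is set in the environment; model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant for creators."},
        {"role": "user", "content": "Draft a three-sentence summary of my podcast episode notes."},
    ],
)

# As with a human collaborator, treat the output as a draft to evaluate, not a fact.
print(response.choices[0].message.content)
```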
When I talk with humans, even my closest friends and family members, I always evaluate what they say rather than taking it at face value. Why not take the same attitude toward systems built with LLMs?
Similarly, I am deeply skeptical of almost everything I hear from the major news sources. I find their content useful, but I keep in mind who owns them, what economic and political agendas they follow, and so on.
So I keep the same healthy skepticism about what LLM-based systems produce. I see no AI Apocalypse.
The AI Apocalypse would play out at a more general level than current LLMs. It is the point where we give the next generation(s) of models ever more control over society without proper alignment, and they then make dangerous decisions we didn't anticipate, because we don't fully understand how the models work and we haven't fully thought through the implications of asking them to accomplish a task.