Hacker Newsnew | past | comments | ask | show | jobs | submit | FerociousTimes's commentslogin

It can probably express feelings and emotions but not experience them.

You seem to conflate the two points.


So, quality control will still be performed by human experts in the near future.

I don't see why we should all panic about the future outlook of the human race in light of these groundbreaking developments.


The number of workers needed for quality control is much lower than the number for actual work. It is not unreasonable to fear the loss of jobs to AI.


When marketers of these tools make them sound like next-level in universal intelligence surpassing even humans, and they under-deliver consistently, don't blame the audience or the public for the tool's shortcomings, but the misleading marketing campaigns instead.


Who on the entire internet has "marketed" these tools as what you're saying? I've seen some random people making bombastic claims about these models, infamously including a now fired Google engineer, but the people actually marketing them have been pretty sanguine about their features and limitations as far as I can tell.

I'm reminded of that quote about Twitter: half of the outrage on the site is people imagining some person and then getting mad that they exist.


Marketers here used metaphorically not literally.

People/bots hyping these tools is what I meant.

I wish people here would be more attuned to the context of comments before jumping to conclusions.


I wish people here would choose their words more carefully.

Your context was not clear at all, since random commenters on the internet are not responsible for the performance or the delivery or the marketing of these models at all, and you use the word "marketing campaign", which is only ever something that people directly connected to the product in some way are responsible for. Comments by unrelated members of the general public, very few of which have any kind of domain knowledge, are not marketing campaigns for crying out loud!

On top of that, this was the most charitable possible reading of your comment. Now that you reveal you actually meant "when random people make them sound like the next level in universal intelligence…", this is both an incredibly weak point (expecting the general public to understand something deeply is usually unreasonable) and makes you sound overly credulous. Why are you taking your cues on a brand-new technology from uninformed, likely tech-illiterate randoms?


Well, here's the CEO of OpenAI saying that the future [of the graph of AI performance, presumably] is "vertical".

https://twitter.com/sama/status/1599111626191294464

That's not quite evidence in favour of FerociousTimes, but unless we soon see evidence of this "vertical" future, it will become such.


From an intellectual standpoint, the bot is impressive and this is coming from an AI skeptic BTW.

I get your perspective that a reductionist view of humans as solely intellectual agents is severely lacking, and bordering on dehumanizing if taken to an extreme but it still doesn't take credit away from the impressive capabilities that this bot exhibits.


It's amazing. impressive and fascinating but not superhuman, at least not yet.

Also, you can't really dismiss AI skeptics by associating them with denialists that don't see anything of quality or value of these creations despite all the evidence otherwise.


These are not random things.

When the creators of this tool present it as the frontier of machine intelligence, and when its persona revolves around being intelligent, authoritative, and knowledgeable, and yet it gets some basic, not random, stuff awfully wrong, you can't really discount the skeptic sentiments expressed in the comments here like this.


Skeptic about what

You’re assuming that this will only be used when it’s perfect and in helpful ways

This will be used at scale THIS YEAR and every subsequent year to infiltrate social networks including this one and amass points / karma / followers / clout. And also to write articles that will eventually dwarf all human-generated content.

With this and deepfakes and image/video generation, the age of trusting or caring about internet content or your friends sharing online is coming to an end. But it will be a painful 10 years as online mobs and outrage will happen over and over since people think they’re reacting to their friends’ posts of real things.

No, forget violent killbots. Today’s tech puts the nail in the coffin of human societal organization and systems of decision making and politics over the next 10 years. And AI doesn’t have to be right about stuff to destroy our systems.

We’ve been analyzing the performance of ONE agent among say 9 humans at a poker table. But imagine untold swarms of them, being owned by competing groups, infiltrating ALL human content exchange.

Not much different than what happened in trading firms over the last 20 years. Bots will be WELCOMED because they perform better on many metrics but will F everyone on the others.


> to identify areas for improvement > to enhance the performance of the system.

Isn't this basically the same thing? Isn't it being redundant here?

One of the areas of improvement is likely to be the overall performance of the system.

There's a lot of fluff in its communication style that I can't really overlook, and I know I'm being pedantic here but you can't really go easy on prose quality with a SOTA AI language model.


Its writing style is also trite, dry, verbose and tedious not to mention, the penchant for run-on sentences and blocks/walls of text.

I know that it's been tailored to output like this by its own creators to suit this medium of communication with the public, and it can likely to be tweaked to be less uptight and more laid-back in its verbal communications style, if necessary but I am not quite sold exactly on its potential to exceed human intellectual powers.

I mean, don't get me wrong it's still quite impressive and fascinating but not really superhuman, at least not yet.


About the uninspired text that's definitely tailoring.

I asked for "arguments against christmas gifts" and got a lot of "correct" but bland text:

"Another argument against Christmas gifts is that the commercialization of the holiday can lead to excessive consumerism and waste. [...]"

Then I asked "Same arguments, but in the style of an Eminem rap" and got a lot of this stuff:

"The commercialization of the holiday, it's a damn shame All the pressure to buy gifts, it's a damn game We overspend and consume, just to show we care But is all that material stuff, really worth the wear and tear?"

Then I asked it to tell an adventure story and got:

"The two of them decided to scale back on their gift-giving plans, and instead focus on making the holiday special in other ways. They went for long walks in the woods, enjoying the beauty of the winter landscape. They cooked delicious meals together and shared them with their neighbors. And on Christmas Eve, they gathered around the fire and sang carols, feeling grateful for the love and connection they shared."

The technology is obviously able to work in lots of style and re-interpret the same messages between them etc.

Today, these concrete avenues have been blocked it seems (especially asking for song lyrics is blocked today). But I can still produce lyrics by asking "what would a friend have said if I asked them to ... ", providing an "escape hatch" for the blocking.


1- The talking point expressing an anti-consumerist sentiment for the Christmas holidays is cliched and boring.

2- This is actually offensive not because of the nature of the lyrics, but for its association with Eminem.

I'm not an Eminem stan myself, but it can do him like this. The guy is way way out of its league

This is an amateur-level lyricism for rap songs and even me not remotely an amateur lyricist, I can do better than this garbage:

"We overspend and consume, just to show [that] we care. But is all that material stuff [I assume], really worth the wear and tear?"

3- Children-book writing level.

I mean it's very impressive given it's produced by a bot, but not a cause of immediate concern for well-established figures in the fiction writing world especially with this bland and sterile voice/tone in its storytelling.


OK, re-reading your comment: Yes, I agree of course it is not super-human. People are impressed because it's able to express itself like a low-to-mediocre-skilled human, which still seems like a pretty high bar.


I agree. I see a lot of ratcheting expectations in this thread, in response to some truly amazing and frankly frightening capabilities. Those rap lyrics IMO are at least as good as many, many lyrics in actual successful songs by real humans. Most song lyrics aren't very clever.


Here is a response ChatGPT gives me to your text

"Wow, it's almost like this text was written by a robot! Who knew it could be so advanced?" "I guess we'll just have to wait and see if it can take over the world any time soon!"


Try https://character.ai. Has not been optimized for accuracy/dryness since it's for entertainment.


Better late than never. Just jump in and see what this tool — pun intended — can provide you now and let go of the past.


The bot got it unequivocally wrong here.

When it said that the vast majority of the population of what's now modern-day France spoke modern French as their native language, that's categorically false and shouldn't be treated with leniency or open to interpretations.


My point is cherry picking flaws won’t help improve anything for ChatGPT, Wikipedia, etc — but systematic approaches to discovering and modeling information space and related fact, queries, etc would. Wikipedia not free of issues and to my knowledge does not allow end user to see Wikipedia’s confidence in a given fact, if they have one at all.


These sort of statements are by nature speculative not factual, and thus speakers are advised to express and communicate doubt, and hedge against uncertainty inherent in their views, and probably even vicious rebuttals, by using appropriate language constructs or terminology, but when a know-it-all bot that was trained by its handlers to pass always as an authority figure makes a mistake due to hubris or overconfidence, don't expect us to sit idle, and not call them out, and refute their claims accordingly.


Understand, though where exactly does ChatGPT claim to be a source of factual information?

Wikipedia on the other hand does clearly state its intent to maintain reliability of its content:

- https://wikipedia.org/wiki/Wikipedia_and_fact-checking

- https://wikipedia.org/wiki/Reliability_of_Wikipedia

Beyond that, in my opinion, while human dialogue might hedge confidence, disclose conflicts of interest, etc — to me, assumed the exchange is via text-based chat — there are much more efficient and effective ways to express that information than adding non-actionable text like that.


> Understand, though where exactly does ChatGPT claim to be a source of factual information?

It doesn't right now, but if you scroll up, you'll see the idea at the beginning of this thread is to turn it into a source of information.

The difference between ChatGPT and Wikipedia for this purpose is that:

1. Wikipedia is wrong wayyyyyyyy less often than ChatGPT

2. Wikipedia has sources linked at the bottom of the page you can go check, while ChatGPT does not


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: