> The image shows a young giraffe standing on a gravelly surface inside a fenced enclosure. The giraffe has a long neck, characteristic of its species, with a short mane running along the top. Its coat is a light brown color with a paler underbelly. The enclosure has a wire mesh fence and appears to be part of a larger outdoor area, with trees and additional fencing visible in the background. The giraffe is looking directly at the camera with its legs slightly apart, giving it a curious and alert appearance.
> anything unusual?
> Yes, the giraffe in the image appears unusual because it does not have the typical spotted pattern that giraffes are known for. Instead, its coat is a uniform light brown color. Additionally, the body proportions and the facial features look slightly different from a typical giraffe, suggesting it might be a unique or hybrid species, or possibly an artistically altered image. This giraffe also seems to have a different build, with a more compact body and shorter neck compared to most giraffes.
This article was posted on August 28, 2023. The giraffe was born July 31, 2023, so presumably none of the models tested included this particular giraffe in their training data. Directly comparing them to GPT-4o, which was released recently, is an invalid comparison. I wouldn't be surprised if GPT-4o does better on novel recognition tasks like this, but you'd need a genuinely new concept in an image to compare GPT-4o fairly against the models tested in this post.
"This image features a young giraffe standing in a fenced enclosure. What's unusual is that the giraffe has what looks like an extra set of small horns, which are not typical for giraffes. Giraffes normally have two main horns (ossicones), but this one appears to have an additional pair above the usual two, possibly due to a genetic anomaly or variation. This feature makes the giraffe in the image quite distinctive."
It has learnt all sorts of invariances, almost certainly including that one.
I've gotten some very weird results with 4o on images, it seems entirely possible to me that it would go off the rails if the image wasn't in the training data.
For this specific case, it's really not easy to test at all.
An invariant isn’t the same thing as a purple giraffe. One is an image manipulation applied at training time to make the classifier robust against transformations. The other is a thing that might someday exist in nature. (The most straightforward way is to dump a barrel of wine over the giraffe and take a photo.)
You're thinking of simple image augmentations. These nets learn much more complex invariants; basically, they learn to isolate concepts from irrelevant context. The point is that you can't remove that image from the training data (not practically), and the experiment is pointless if it's in there.
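For concreteness, the "simple image augmentations" being contrasted here are pixel-level transforms applied at training time so the classifier sees many variants of each image. A minimal sketch in plain Python (the flip probability and jitter range are illustrative, not any particular library's defaults):

```python
import random

def augment(image, rng):
    """Two classic train-time augmentations on a 2-D list of pixel
    values: a random horizontal flip and a brightness jitter."""
    out = [row[:] for row in image]
    if rng.random() < 0.5:
        out = [row[::-1] for row in out]   # horizontal flip
    scale = rng.uniform(0.8, 1.2)          # brightness jitter
    # Clamp back into the valid pixel range after scaling.
    return [[min(255.0, max(0.0, p * scale)) for p in row] for row in out]

rng = random.Random(0)
img = [[10.0, 200.0], [30.0, 40.0]]  # tiny single-channel "image"
aug = augment(img, rng)
print(len(aug), len(aug[0]))  # 2 2 -- augmentations preserve shape
```

Transforms like these make a model robust to flips and lighting changes, which is a far narrower kind of invariance than recognizing "giraffe" independent of its coat pattern.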
This was the first thing I did and I intuitively thought it would be able to do it.
I then thought I’d see if it can MAKE an image of a giraffe with no pattern - but it absolutely cannot. Even after 10 or so attempts with different prompts and continually sending it back its own generations with feedback, every giraffe has a pattern.
DALL-E 3 and similar are different models compared to LLMs, but in the future OpenAI will probably allow generating images directly from GPT-4o without using the DALL-E model (as shown on the demo page). I bet it would be much better at following prompts and generating unexpected things.
The point of this post is that AI is horrendously bad at anything not in its training set. Asking an AI that does have it in its training set about it totally misses the point.
> AI is horrendously bad at anything not in its training set.
The whole point of AI/ML algorithms is to find generalizations that explicitly apply to data outside the training set. Just because it gets some things wrong doesn't mean that it's bad at anything not in its training set.
Has anyone seen speed differences with the new gpt-3.5-turbo-0613 model? I've been testing for the past hour and I'm getting responses in about a quarter of the time.
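For anyone trying to verify a speedup like this, one approach is to time repeated calls and compare medians rather than eyeballing single responses, since API latencies are noisy. A sketch with made-up placeholder numbers (the lists stand in for real timings captured with `time.perf_counter()` around each API call):

```python
import statistics

def median_latency(samples_s):
    """Median is more robust than mean against occasional slow calls."""
    return statistics.median(samples_s)

# Hypothetical measurements in seconds -- replace with real timings.
old_model = [4.1, 3.8, 4.5, 4.0, 3.9]
new_model = [1.0, 1.1, 0.9, 1.2, 1.0]

speedup = median_latency(old_model) / median_latency(new_model)
print(f"median speedup: {speedup:.1f}x")  # 4.0x for these sample numbers
```

With a couple dozen calls per model at the same prompt length, a ratio like this is a much fairer basis for "about a quarter of the time" than a single request.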
This article is translated from Spanish using some kind of AI (see the disclaimer at the top of the article). Because of this, it's not clear what the article is actually talking about: ads, or a new type of notification.
Indeed, it's vague and confusing; it sounds a lot like corporate-speak for ads. "Valuable messages from our partners" sort of thing.
From the first few paragraphs it seems like they are describing some sort of toast notification for when they have some sort of announcement to make, but that wording could very well be a euphemism for targeted ads.
Yes, I assume that's the same. What I mean is that '0' obviously looks bad, while '2007', as it was when I commented, sounds good (to me anyway).
If you knew you wanted to have that copy on day zero, you probably wouldn't launch with it, because it doesn't look good, so I just wonder at what point people think it starts to be positive, or at least not negative.
You might be surprised, but many places have similar murder rates to Iceland. For example, Vancouver, Canada had 9 murders in 2019 [1], not far off of the ~2/year that Iceland has [2].
I am surprised about Canada. I always thought being nice and saying sorry were just Canadian stereotypes, but maybe it's because of a culture that doesn't promote violence?
I do not know what that has to do with the comment you replied to. The second part is false. As the only Vancouverite on duty here on HN at this hour, I suppose it is my duty to correct you.
When people say Vancouver, they are often referring to the metro region. 49.3% are of European ethnic origin[0], with the remainder a broad mix. There certainly is a large Chinese population. We have the second-largest Chinatown in N. America, after SF. We have a lot of Cantonese speakers here.
There was a Chinese-focused ride hailing service that flew under the radar back when Uber/Lyft weren't permitted. It feels like a separate world sometimes. I'd like to learn Cantonese and/or Mandarin, just so I could peek into it.
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/p...