“We don’t train on your data” doesn’t exclude metadata, or training on derived datasets produced via some anonymisation process, etc.
There’s a range of ways to lie by omission here, and the major players have established a reputation for taking an expansive view of their legal rights.
Right - the scraper operators already have an implementation that can use the HTML; why would they waste programmers' time writing an API client when the existing system already does what they need?
Because if you own your car you leave your golf clubs in there just in case you get invited for a round.
If you're the type who is willing to be seen in a used car, you can save a lot of money: the rental fleet needs to be newer cars just in case someone who wouldn't be seen in a used car wants one, and that adds a lot of cost.
Not really - what keeps every media outlet doing that is the fact that China is the biggest economy in the world and an active enemy of the Western bourgeoisie. That is explicitly defined policy in the USA/UK and all the major capitalist countries currently.
You want another example of Western hypocrisy? Everyone started worrying about a "massacre" in Xinjiang, WITHOUT ANY EVIDENCE (the source was... Radio Free Asia, which is CIA). But then the Palestinian massacre came to the news again, with Israel large-scale deleting women and children from existence, and suddenly everyone forgot about Xinjiang and genociding Middle Eastern people is allowed. Wonder why?
However, for low-density areas the bias should be too small to notice.
Trees grow closer to one another in dense forests - perhaps dividing the exclusion radius by a local density factor would keep the bias small enough to be invisible?
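A minimal sketch of that idea, assuming a simple dart-throwing sampler on the unit square and a hypothetical density(x, y) function (every name and constant here is invented for illustration):

```python
import math
import random

def density(x, y):
    # Hypothetical local tree-density factor in [0.5, 2.0]:
    # the forest gets denser toward the right edge of the unit square.
    return 0.5 + 1.5 * x

def sample_trees(n_target, base_radius, max_attempts=100000):
    """Dart-throwing with a density-scaled exclusion radius.

    A candidate is rejected if it falls within
    base_radius / density(x, y) of an already-accepted tree,
    so denser areas pack trees more tightly.
    """
    points = []
    attempts = 0
    while len(points) < n_target and attempts < max_attempts:
        attempts += 1
        x, y = random.random(), random.random()
        r = base_radius / density(x, y)
        if all(math.hypot(x - px, y - py) >= r for px, py in points):
            points.append((x, y))
    return points

trees = sample_trees(n_target=200, base_radius=0.05)
print(f"placed {len(trees)} trees")
```

The rejection test here is O(n) per candidate; a real implementation would use a spatial grid, but the bias behaviour is the same either way.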
Ten years ago, if it didn't understand what I meant, it told me so after 1-2 seconds.
Now, it'll show a loading indicator for 5-6 seconds and then do nothing at all... or do something entirely unrelated to my request (e.g. responding to "hey siri, how much is fourteen kilograms in pounds" by playing a song from my music library).
> or do something entirely unrelated to my request (e.g. responding to "hey siri, how much is fourteen kilograms in pounds" by playing a song from my music library)
My personal favourite is Siri responding to a request to open the garage door, a request it had successfully fielded hundreds of times before, by placing a call to the Tanzanian embassy. (I've never been to Tanzania. If I have a connection to it, it's unknown to me. The best I can come up with is that "Zanzibar" sort of sounds like "garage door".)
I'm amazed more AI tools don't have reality checks as part of the command flow. If you take a UX-first perspective on AI - which Apple very much should - some percentage of commands will be interpreted incorrectly, causing unintended and undesirable actions. A reasonable way to handle these failure cases is to have a post-interpretation reality check.
This could be personalized, 'does this user do this kind of thing?' which checks history of user actions for anything similar. Or it could be generic, 'is this the type of thing a typical user does?'
In both cases, if it's unfamiliar you have a few options: try to interpret it again (maybe with a better model), raise a prompt with the user ('do you want to do x?'), or, if it's highly unfamiliar, auto-cancel the command and say sorry.
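A rough sketch of such a gate, assuming a hypothetical Interpretation type carrying the interpreter's own confidence score and a plain list of the user's past actions (thresholds and names are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    action: str        # e.g. "open_garage_door"
    confidence: float  # interpreter's own score in [0, 1]

# Hypothetical thresholds; real values would be tuned on logged commands.
FAMILIAR_SHARE = 0.05   # action must be at least 5% of history to auto-run
MIN_CONFIDENCE = 0.8

def familiarity(action: str, history: list[str]) -> float:
    """Personalized check: how often has this user done this?"""
    return history.count(action) / len(history) if history else 0.0

def reality_check(interp: Interpretation, history: list[str]) -> str:
    """Post-interpretation gate: execute, confirm, or cancel."""
    share = familiarity(interp.action, history)
    if share >= FAMILIAR_SHARE and interp.confidence >= MIN_CONFIDENCE:
        return "execute"
    if share > 0.0:
        return "confirm"  # prompt the user: "do you want to do x?"
    return "cancel"       # never seen before: apologize and stop

history = ["open_garage_door"] * 300 + ["play_music"] * 50
print(reality_check(Interpretation("open_garage_door", 0.9), history))        # execute
print(reality_check(Interpretation("call_tanzanian_embassy", 0.6), history))  # cancel
```

The generic variant ('is this the type of thing a typical user does?') would swap the per-user history for aggregate action statistics, with the same three-way outcome.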
Apple shot themselves in the foot in the late 2010s by switching to deep learning methods and making things slower and worse, with the spell checker being the worst example.
I don't have anything in my music library, yet both Siri and Alexa (via a Spotify account I don't have) have responded to "${room I'm in} on" with their versions of "I can't find ${room name} in your music collection".
I'd argue this comes back to "written by people who do not have to follow them on a regular basis".