It's that the guy who won this time is a horrible human being and is the head of a party that seems barely interested in governing, but more in enriching themselves even if it means sacrificing rights and privileges of those they govern. The guy that won last time is not as horrible a human being and was interested in governing for all Americans.
So when the oligarch heads of companies decide to donate to the new guy's big party, it's tantamount to simply ignoring all the truly shitty things the new guy and his party have done and have said they're going to do, because it's more important to the oligarchs that the real power in the land[1] look favourably upon them and their designs. It has been made abundantly clear that currying that favour costs money.
[1] Make no mistake, the US government is more powerful than all of the tech companies in the country and its new leader intends to wield that power as capriciously as he sees fit.
Couple is two. Few is four. Several is at least five. There is nothing for three, except for very large values of two or very small values of four (these are so uncommon in practice that we just use three).
> Then shalt thou count to three, no more, no less. Three shall be the number thou shalt count, and the number of the counting shall be three. Four shalt thou not count, neither count thou two, excepting that thou then proceed to three. Five is right out.
I was always taught that few is either three or four, and you cannot definitively infer which without more information.
But I think this just illustrates the point of the article: people are taught different things, and there have been differing opinions on usage for hundreds of years, so we just cannot be definitive here.
If you need to convey a specific number, use that number. Even if you aren't completely settled on a number, but you need to tell someone to do something that will ultimately result in a number, do them the courtesy of fixing your indecision, and pick a number to tell them. If you want to convey some semi-amorphous magnitude, and the number ultimately doesn't matter in any concrete way, sure, you can use couple, few, or several.
Jassy's answer to that question contained this tidbit about "how to build a strong team? ... go into the office ..." At that point I was certain the RTO message was coming and it was just a matter of time. Turns out that time was today.
It's not that they necessarily need true reasoning. It would just be extremely beneficial, because that's what every human driver already has built-in (although exercised to varying degrees of success).
What about an axe that falls off the landscaping truck in front of you? Or a mattress? Or a harmless styrofoam cooler? The self-driving car does not know what those things are, only that suddenly there is <something> in the air that will likely collide with you. The computer is unable to predict how the mattress or the axe or the styrofoam will fly through the air; it can only take abrupt, evasive action. It is also unaware that the driver behind you has been periodically looking down at their phone instead of watching the road, so an abrupt swerve or stop may still cause an accident. A sensible human driver would realize they are taking on additional risk by following such a truck and/or remaining in front of the distracted driver, and maybe decide to change lanes safely. The car's pretend AI has no idea of any of these things until something falls off and it has to react. When it gets it right, we'll praise it--ooooh, how ingenious! When it fails, the apologists will claim "there's no way it could have made a perfect decision--look how many other times it gets it right!" And the realists will conclude "ha, stupid computer, told ya so".
Humans have reasoning; AI has zero aggressive instinct and instant reaction time. It could be a pretty even battle.
They might make really bad decisions in edge cases (which will get less and less common as more cars get smart), but they might make up for it with perfect behavior in ordinary circumstances.
They will never prioritize convenience or speed or avoiding angering other people over safety. They'll do the safe thing even if no human driver could maintain that level of paranoia at all times.
Some time ago I asked one of my Indian friends (who manages a software team in India and the US and has travelled the world, so he has a varied perspective) what he thought was the biggest barrier to India's advancement. He said "Corruption. The bureaucracy and the government have so much corruption at so many levels". This sounds like a different way to state your mention of "management failures". Hopefully the additional worldwide scrutiny is a motivator for change.
“Corruption” is an easy scapegoat because it allows the electorate to believe that a new, clean and strong politician can solve all their problems. The reality is a combination of low civic sense (why litter in the first place?), under-resourced enforcement (who is going to stop me from littering?), limited funding for infrastructure because of a lack of independent revenue sources available to city administrations, and bad policy that makes city administration effectively a puppet of the state (provincial) governments. In all this, corruption plays a role in making a bad situation worse.
I wanted to say corruption, because that's usually a simple root cause of many such failures... whether it's corruption because of nepotism or graft or just pure theft, the result is delivering something far less than should have been achievable.
However, on the global corruption scale, India is "not so bad". Even so, "not so bad" when applied to a billion+ people is bad.
lol You're on a website catering to people who dislike having others prove them wrong and will stretch "technically correct" as far as possible, in the face of all else. If computers didn't exist, they'd be lawyers. If lawyers didn't exist, they'd be clergy.
> Sorry, I forgot you did not specify. [...] I'm sorry for assuming.
No it is not sorry, and no it did not forget.
Why should I believe the computer forgot that I did (or didn't) tell it how to refer to me? It's a machine--why should it ever forget anything? What good to me is a machine I have to remind what I told it? And if I did not ever specify how to refer to me, yet it's claiming it forgot, then it's lying. What good to me is a machine that lies to me?
It doesn't understand what being "sorry" even means--it just said that because the model/context indicated it should say that. It cannot be sorry in any useful sense of the word because it cannot feel remorse or regret or guilt or shame or anything. The machine telling me it is "sorry" means nothing and doesn't indicate it has any great insight (indeed, it has no insight at all) into human feelings.
When adult humans do that, we call them assholes, or sociopaths.
The AI doesn't "feel" anything. It is simply attempting to predict the most human-like next word in a sequence of words, based on training over millions of lines of human dialogue.
They could train the model on all of the millions of comments on Hacker News, for example, and it would eventually respond to things in a way that was virtually indistinguishable from the average user here.
When it says it "forgot" something, there is no actual memory that failed; it has no memory in that sense at all. "Forgetting" is merely a conversational pattern it picked up from the dialogue it was trained on.
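To make the "predict the next word from patterns" point concrete, here's a deliberately tiny sketch: a bigram lookup table built from a few made-up lines of dialogue (the corpus, the words, and the `predict` helper are all invented for illustration; real LLMs use neural networks over tokens, not frequency tables, but the underlying idea of emitting whatever commonly follows in the training text is the same):

```python
from collections import Counter, defaultdict

# Hypothetical toy training dialogue; "sorry" appears often after "am",
# so the model will "apologize" with no remorse involved.
corpus = (
    "i am sorry i forgot . "
    "i am sorry i assumed . "
    "i am glad you asked . "
).split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the word that most frequently followed `word` in training.
    return follows[word].most_common(1)[0][0]

print(predict("am"))  # -> "sorry": the commonest pattern, not a feeling
```

The model says "sorry" purely because that string dominated its training data after "am", which is the whole point being made above.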