They don't believe it's unconstitutional. They believe it conflicts with the Administrative Procedure Act of 1946, something they apparently believe the unanimous 1984 Chevron decision got wrong.
I know little about US law, but I thought one of the principles of common law is that once a precedent is set then it is set forever unless changed by statute? Allowing a court to change precedents undermines the whole concept of common law, doesn't it?
This is stare (“starry”) decisis, and while it is absolute vertically, it is less so horizontally. Basically the court can decide that it got it wrong before, but the 9th Circuit is bound by a higher court’s precedents (SCOTUS in this case): https://www.law.cornell.edu/wex/stare_decisis
With the caveat that the appeals courts set lots of precedents where the cases never reach the Supreme Court, in which cases the appeals courts are as bound by their precedents as the SCOTUS is by its precedents (i.e., not really).
Technically, SCOTUS, as the highest court, isn't beholden to anyone's precedent. This year's SCOTUS is just as legitimate as last year's, so they always have the power to overturn past decisions. Lower courts have to follow what higher courts decide, but SCOTUS has no higher authority. And sometimes, like in Brown vs. Board of Ed (which ended legal segregation), it's a very good thing for the Court to overturn its past decisions.
But in practice, to non-lunatics, stare decisis (the legal principle that says not to overturn, or even consider, topics that have already been decided in the past without an extremely good reason) is an incredibly important prior to bring into any discussion. If the court actually uses its power to completely rewrite the rules of how government works on a whim - and let's be clear, that's what this decision does - then there's no way for anyone to ever make a plan. Nothing is stable.
Unfortunately, at least 5/9 of the current Supreme Court are either lunatics or blatantly corrupt. Chevron was decided unanimously for a reason. There is no way to administer a modern state without that concept - which is why right-wing extremists are so happy to see it gone, because they don't want the state administered.
That likely depends on where you are and what the content actually is. In Denmark, where I am located, it would definitely not apply in most cases. Of course, if you are a lawyer communicating with your client it would be protected, but that would be the case even without the disclaimer.
Interestingly, my experience is totally different. I loved working with XAML and C# for building Silverlight apps, and would have preferred using that across more platforms. Especially with a nice editor like Expression Blend that got folded into VS.
Totally had your experience as well. I thought that the bridge between XAML and the underlying object model was pretty rad.
There were quite a few rough edges, though, especially around dependency properties. Those had way too much boilerplate. Their hip new event system needed some fine-tuning too.
If Google set up a subsidiary in Europe that ran the service for European users and was the only entity with access to the data, this would not be a problem. The problem here is Google's insistence that US entities will be allowed to access European data.
It's really strange to me that we allow this sort of beta testing on public roads. The car is doing multiple things in this video that are problematic, with the driver being slow to react in order to see what it ends up doing.
This should not be something that is allowed on public roads by end-users, but rather on closed tracks by specialists. If they want to test it out on public roads, run the analysis and look at wherever it diverges from the driver's decision-making instead.
I work in the self-driving space. Before I did, I was super hyped about what Tesla were doing with self-driving and Autopilot, but once I actually started seriously looking at the safety ramifications of a vehicle driving with these technologies, I changed my tune really quickly.
A massive problem, particularly with the more capable L4-style technologies, is that you get lulled into a false sense of security because it'll drive a decent distance perfectly fine right up until it doesn't (and it's normally spectacularly bad when it fails).
Testing purely on a closed track or from simulated analysis only goes so far; you definitely need to do public road testing before you can hand it to end-users. But the driver needs advanced driver training so that they're more aware of the hazards, they need to always be ready to take over (as with an L3 system, even if you're developing L4), they need to fully understand the capabilities, ODD and behaviour of the software, you need to keep the durations/distances short to avoid driver fatigue, and ideally you have a second person in the vehicle to keep them honest.
Letting end-consumers use this technology on public roads is insane. It feels to me like the reason Tesla do it is they've boxed themselves into a corner because they've already sold it.
My main concern is that Tesla will end up poisoning the well in a regulatory sense, and other SDCs that are focused on doing narrow verticals properly will get caught in the crossfire.
The common pro-Tesla argument that delaying self-driving costs lives is absolutely valid. But it doesn't follow that allowing Tesla FSD to run red lights is the best way to accelerate adoption as a whole.
If the whole industry was as cautious as Waymo, I think the risk of regulation would be minimal.
This is orthogonal to how well done the system is. I am extremely impressed it works as well as it does. But it's not good enough.
> The common pro-Tesla argument that delaying self-driving costs lives is absolutely valid.
It would be if self driving Teslas were safer, but the evidence seems to suggest that they crash more often than regular drivers once you control for when it is used.
Tesla doesn’t have to be safer now to follow this reasoning.
If Tesla’s approach ultimately succeeds in getting FSD working and into the mainstream, then for each year earlier that happens than it otherwise would, up to about 30,000 lives are saved, millions of injuries prevented, and roughly $1 trillion USD in total lifetime value is preserved as the worldwide accident rate goes to zero.
In that context, FSD should absolutely be civilization’s next moonshot.
That’s to say nothing of the other economic benefits besides simply killing and maiming fewer people and destroying less property.
Interesting idea. But the only problem with it is the assumption Tesla can succeed. I am not sure about that.
The other, darker, possibility is that Tesla does so poorly in this moonshot that they poison the well for everyone else, some of whom might be more equipped to succeed.
There are a lot of legs to your hypothetical here. In particular assuming that Tesla's approach ultimately succeeds is a big leap.
If Tesla's approach ultimately turns into a dead end (and that is the outcome of the overwhelming majority of very ambitious tech projects), then all the time and resources spent on Tesla's approach could be seen in hindsight to have been diverted away from potentially fruitful projects, actually making mainstream FSD happen later than it otherwise would.
FSD being civilization's next moonshot doesn't mean Tesla should necessarily get a free pass to test beta FSD tech out on public roads without scrutiny.
I find this line of reasoning strange. It's like if we had only kind of figured out how to do heart transplants on pigs, and then immediately went full steam ahead operating on people, because otherwise it would've cost the lives of people who didn't receive the operation.
Yeah, if you look at safety standards in other industries, this is really unacceptable. It should be stopped immediately.
Also, why do we allow car manufacturers to test their own software? Shouldn't it be done by a third party? And shouldn't that third party be the only ones allowed to push updates to the car?
I remember, literally over 5 years ago, hearing from someone at a big auto manufacturer, and they just explained that they can't afford to have their cars known for killing people. They sell a shit tonne of cars, and if they start running people over they're done. It'd be an extinction-level event for their brand, and probably a serious knock to the entire industry. Apparently Tesla is happy to take that risk. It's not that Tesla is more advanced, it's that they're happy making claims that no other company in an industry obsessed with safety would make.
Imagine Volvo, but instead of Volvo you have a company that distinguished themselves by their lack of interest in safety.
Having only recently been looking at cars, what I have found interesting is collision detection and automatic braking. It seems some manufacturers have a reputation for getting it right, and other manufacturers a reputation for a terrifying feature that drivers disable because it goes off at exactly the wrong time.
I find my father-in-law’s Volkswagen T-Cross terrifying to drive. If it’s not distracting you with shrill warning beeps and bongs, it’s getting confused and slamming on the brakes at every slick or shiny surface. It is unquestionably more dangerous than if it just left the driving to me.
Hard to understand how people have affection for this brand.
...Because they weren't daft enough to commit to employing black boxes with no means of formal proof to a safety-critical operation. Musk's approach is a massive public-safety no-no. The cost of specifying and proving through trial the capabilities of what Musk is aiming for is the work of several lifetimes. Musk and Tesla just fucking YOLO it, yeeting out OTAs that substantially change the behavior of an ill-tuned system whose behavior can't even be reliably enumerated, and sinking the operational risk in drivers on the road.
Sometimes, conspicuous lack of progress is a good thing. It isn't something you necessarily appreciate until you suddenly start having to confront the law of large numbers in a very real and tangible way. Some incremental changes simply are not feasible to take until they are complete. Level 3 automation is one of those...
There is no solution to self driving that doesn’t involve a black box. The safety of the system is easy to measure. When there are fewer interventions than accidents for a solid chunk of time, FSD will be safer. It could eventually reach 1 intervention per hundred thousand accidents, if you would just let them continue.
> When there are fewer interventions than accidents for a solid chunk of time, FSD will be safer. It could eventually reach 1 intervention per hundred thousand accidents, if you would just let them continue.
And in the meantime, I and other drivers, cyclists, pedestrians are subject to increased danger for what? Oh, Tesla's profits? Forgive us if we don't all see this as an acceptable tradeoff.
They aren’t in any danger. The guy driving in the video is crazy and not disengaging when the car is misbehaving. With your hand brushing the wheel, a person can regain full control of the vehicle well before there is any danger. And yes, I would like to see not only Tesla's profits go up, because they are the only company doing self-driving; I would also just like to see this project move forward. It’s the coolest project in the world, and if it succeeds it will save millions upon millions of innocent lives.
Furthermore, if you really were so edge-of-your-seat scared of traffic fatalities, then Tesla would be at the bottom of your list. Why don’t you go do something about the droves of people that stream out the back of bars and into their cars every night? They kill thousands every year, while Tesla has killed roughly zero people.
It really doesn't matter whether the driver should or should not be disengaging; there are many, many studies categorically proving that "allowing the driver to be mostly relaxed and not required, only to require immediate intervention in dangerous situations" is absolutely, empirically less safe. You can't just whitewash it away with "oh well, it will get better". When? And don't mention a word about Elon's opinion on when. The guy has been promising "this year" every single year for nine years now. More realistic estimates have this a decade, or two, away, at the very earliest. And I suspect that when it does arrive, Tesla will be nowhere near it. Their phantom-braking fiasco proves just how horrific Tesla's approach to testing is: throwing multiple releases out into the wild with less than 72 hours between them, for absolute safety features. Anyone who claims that those releases were subject to any form of rigor in testing whatsoever is deluded, and anyone claiming that testing it on public roads is somehow acceptable is equally deluded.
I am very, very well aware of exactly what causes traffic fatalities. According to the software at my work, I have personally responded to 378 fatality MVAs as a paramedic. Please don't assume everyone is ignorant about the realities; we are not blindered, nor only physically capable of recognizing and responding to one danger at a time.
You can't ask people to use a driving aid and not end up less focused. With advertising, "infotainment" (which is really disguised entertainment), music, the outside environment and passengers, it is already hard for a driver to focus on their driving. You can't expect any human being, short of people paid to do exactly that, to keep their hands brushing the wheel and feet ready to slam the brakes.
Having said that, I am not sure most human drivers are safer for cyclists and pedestrians. FSD is in such a bad state right now that the Tesla drives through the streets at the speed 80-year-olds do. What I saw in one video is a car that drives at a similar pace to a cyclist, and it is even much slower at intersections.
And however much Tesla likes to say "Oh, yes, yes, the driver should be paying full attention", everything else they say and do says the opposite. The latest example is the update that rearranged some of the climate controls and added/updated some larger hot buttons at the bottom of the screen. Not all functions are available to be pinned at the bottom. You get a limited choice, which includes Netflix.
So to be clear you can have an always available hot button for Netflix, but not for climate control. All Tesla's handwaving is entirely bullshit. "The driver is in the seat for legal purposes only. The car is driving itself."
> There is no solution to self driving that doesn’t involve a black box.
LIDAR greatly reduces the "black box" necessity. It basically allows you to do things like "if an object is in the way, then brake/move elsewhere", and the sensor doesn't really fail in good weather.
Given its safety advantage over DL-only solutions, this should be step 1 toward FSD, not reckless beta-testing with black-box techniques.
Tesla has chosen the cheap way, which is also the irresponsible way.
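As a rough sketch of the kind of rule-based safety layer a direct range sensor makes possible (purely illustrative Python; the coordinate conventions, thresholds and function name are invented for the example, not anyone's production stack):

    import numpy as np

    def emergency_brake_needed(points_xyz, lane_half_width=1.5, stop_distance=20.0):
        # points_xyz: (N, 3) LIDAR returns in the vehicle frame, x forward, y left, z up.
        # A toy rule, not a planner: brake if anything solid sits in our lane and too close.
        ahead = points_xyz[points_xyz[:, 0] > 0]                 # points in front of the car
        in_lane = ahead[np.abs(ahead[:, 1]) < lane_half_width]   # roughly within our lane
        obstacles = in_lane[in_lane[:, 2] > 0.3]                 # ignore the road surface itself
        if obstacles.size == 0:
            return False
        return obstacles[:, 0].min() < stop_distance             # nearest obstacle is too close

    # An obstacle 12 m ahead in our lane triggers a brake request; one 40 m away and
    # well off to the side does not.
    cloud = np.array([[12.0, 0.2, 1.0], [40.0, 5.0, 1.0]])
    print(emergency_brake_needed(cloud))  # True

The point isn't that this rule is sufficient on its own, just that a direct distance measurement lets the most safety-critical decision be expressed as an auditable rule rather than buried inside a learned model.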
I'd rather my car's safety systems be later to market but proven safe, than early to market and have me and the others around me as unpaid beta testers.
This isn't even beta testing, it's closer to development. The driving shown in the video is that of a drunk or texting driver. For at least half of those turns it couldn't even pick a line and stick to it, instead swerving haphazardly as it became more aware of where the lane was. Never mind the constant turns into bus only lanes. I'd love to see municipalities start writing tickets on all the well-documented violations in these videos, citing Tesla itself as the driver.
Well said. I agree up to one point: I know that beta software is also tested on public roads in the industry, but only by trained drivers. On a project I did in the past, I was driven in such a car when a colleague gave me a lift to meetings. It was about 15 years ago. From the outside it was an old model, but inside the electronics were all new.
“Fake Cities” don’t have nearly enough complexity.
I agree they could start there, but you’ll graduate quickly without having learned much.
The main question is, are we willing to put people in harm's way today for the benefit of future humans? The answer seems pretty obvious to me. Drunk humans are considerably worse than this and are not going away anytime soon. If we can solve self driving just 1 year earlier it’s equivalent to saving 30,000 American lives.
Put another way, if you want rules that delay the advancement of self-driving cars, you're effectively murdering 30,000+ Americans every year.
> If we can solve self driving just 1 year earlier it’s equivalent to saving 30,000 American lives.
That strawman argument only works if you completely ban all human drivers the moment we solve self-driving.
The big question is, when exactly do we consider self driving solved enough that we can ban or replace human drivers? All current evidence points to it being very, very far away, if ever.
Uh... You'll never stop running into problems that require a driver to take control. Automation is only as reliable as the sum of its parts. Can't wait to see the first set of failures that prevent defective units from driving themselves back under their own power without a human backup option.
I'm not trying to defend self-driving or Tesla here, like, at all, but I don't agree with this statement.
Evolution has gifted us with a phenomenal device, the brain, but evolution is extremely cautious and conservative. There is very little reason to expect that human innovation won't catch up to and surpass evolution. 10,000 years from now, I expect self-driving will work perfectly. 10,000 years from now, our brains will (unless modified by our technology) be largely unchanged.
If safety is your #1 goal, you should be advocating for buses and trains. Those are infinitely safer than cars, self-driving or otherwise.
[edit] Transit is 10x safer than private cars. [1]
> The effects extend beyond individual trip choices, too: the report notes that transit-oriented communities are five times safer than auto-oriented communities. Better public transportation contributes to more compact development, which in turn reduces auto-miles traveled and produces safer speeds in those areas. On a national scale, too, the U.S. could make large advances in safety if each American committed to replacing as few as three car trips per month with transit trips.
Buses and trains are safer than cars, but certainly not infinitely so. Nonetheless, the infrastructure we have isn't built for them, and that won't change any time soon. If you want a suburban house with a yard, you need a passenger car. If you want to pick blackberries at the local farm, you need a passenger car. Making passenger cars safer through autonomy is clearly a good thing.
By all means, advocate for more transit friendly urban centers. I'm with you. Just don't take away autonomy out of spite. Better cars are still better, even if they're not the solution you want.
> Making passenger cars safer through autonomy is clearly a good thing.
I'd actually disagree with this stance. Making passenger cars safer through autonomy is probably a good thing if we can actually make it safer than human drivers. I've yet to be convinced we are anywhere close to meeting the bar on that if. I assume we will eventually, but I'm not even sure I'll live to see it.
It also ignores potential knock-on effects: sure, in isolation safer cars are better, but the reality is nothing exists in isolation. Could we save more lives if, instead of spending the money we are on self-driving cars, we invested it in our transit systems?
As an example of knock-on effects, affordable cars feel like an easy win, right? They make travel easier for everyone. But by and large affordable cars are what has allowed suburbs to exist, and there's an argument to be made that urban sprawl is far from ideal and that we'd be better off with denser communities and public transit.
> What would convince you? Data from 60k cars isn't sufficient?
It would be if the data showed they were safer than human drivers, and was independently obtained. I have yet to see any data that suggests this or anything close to this.
Uh... no? I suspect you're referring to the Goodall preprint that did the rounds a few days ago. What it purported[1] to show was not that AP was less safe than a regular driver, but that it was less safe than Tesla claimed. It still showed that it was (moderately) safer than Teslas being driven without active safety measures, which are themselves about 3x safer than the average vehicle.
You seem to have taken the opposite conclusion, which is exactly what the feeding frenzy over the paper wanted.
[1] The methodology is hugely suspect: you can't take an incomplete data set and then just "correct" it by inventing axes that you pull in from other incomplete data sets that weren't studied or measured in the original! That's rank P-Hacking. It seems reasonable, but I guarantee that a talented statistician can push any such data set 2x in either direction with that kind of trick.
It's not a question of taking away autonomy, and there's certainly no spite about it. If you want to get around town, you have bikes or e-bikes. If you want to get out of urban centers, you can always rent a car at the periphery.
I've lived in SF for 10 years with no car and have never felt unable to do anything I've wanted at any time.
> If you want to pick blackberries at the local farm, you need a passenger car.
Not really, the farm can have a bus with regularly scheduled pick-ups or routes like a lot of the Napa wineries do.
You seem to be ignoring the last and most crucial point:
> If they want to test it out on public roads, run the analysis and look at wherever it diverges from the driver's decision-making instead.
It would be trivial to analyze the data after the fact to see where the AI model diverges from the human driver’s actions, decide which one was right, then implement the correct action. That wouldn’t slow down testing at all, as the only difference is who’s controlling the vehicle. In any beta test, someone still has to analyze the data.
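A rough sketch of what that offline divergence check could look like (illustrative Python over hypothetical logged steering traces; the names, units and thresholds are assumptions, not Tesla's actual logging format):

    import numpy as np

    def find_divergences(human_steering_deg, model_steering_deg, threshold_deg=5.0, min_samples=10):
        # Flag index ranges where the shadow model's steering command differs from the
        # human's by more than threshold_deg for at least min_samples consecutive samples.
        diff = np.abs(np.asarray(human_steering_deg) - np.asarray(model_steering_deg))
        flagged = diff > threshold_deg
        ranges, start = [], None
        for i, f in enumerate(flagged):
            if f and start is None:
                start = i
            elif not f and start is not None:
                if i - start >= min_samples:
                    ranges.append((start, i))
                start = None
        if start is not None and len(flagged) - start >= min_samples:
            ranges.append((start, len(flagged)))
        return ranges

    # Each flagged range becomes a case for engineers to review after the drive:
    # was the human or the shadow model right, and does the model need a fix?

Running the model in that kind of shadow mode collects the same training signal without ever letting it touch the controls.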
How about we measure and see if FSD is killing people at all first? It's not an unanswerable problem, after all. There are 60k+ of these devices on the roads now. If the statistics say it's unsafe, pull it and shut the program down.
Do they? If they don't, would you admit that it's probably a good thing to let it continue?
Exactly. Let's measure. Is that rate higher than seen by median cars in the environment? I'd argue no, given how distressingly common that kind of incident is (certainly it's happened to me a bunch of times). But I'm willing to see data.
I think where you're going here is toward an assertion that "any failure at all is unacceptable". And that seems innumerate to me. Cars fail. We're trying to replace a system here that is already fatally (literally!) flawed. The bar is very low.
>I think where you're going here is toward an assertion that "any failure at all is unacceptable". We're trying to replace a system here that is already fatally (literally!) flawed. The bar is very low.
Failure is not the issue when it comes to Tesla FSD, accountability is.
For any mistake a human driver makes, they have to pay with money, have their license suspended, or serve jail time, depending on the severity of the mistake.
You fuck up, you pay the price. That's the contract under which human drivers are allowed on the road. Human drivers are indeed flawed, but with our law and justice systems, we have accountability to keep those who break the law in check, while allowing freedom for those who respect it. It's one of the pillars of any civilized society.
In my country, running a stop sign or a red light means you get your license suspended for a while. When a self-driving Tesla makes the same mistake, why doesn't Tesla's FSD AI have its "license" suspended as well? That's the issue.
What's the trolley problem have to do with this situation?
Are there accidents where death is unavoidable? Yes, they happen every single day, but after the investigations and trials are over, the parties found responsible pay up for those deaths in either money or jail-time, or both.
Does that mean we should allow machines to make deadly mistakes, especially when death IS avoidable? Absolutely not. We sentence humans for such mistakes. Machines (either their operator or their manufacturer) should carry the same liability.
Those are two different things which you're trying to spin into a strawman.
Let's say you are on an overpass above a train track, and a very fat man is in front of you. The train, if it isn't stopped, will kill 10 people on the tracks. But if you push the person in front of the train, it will kill 11 people, and one of those deaths would be a homicide committed by you.
Even if it means that a loved one of yours would get run down by one of these, it's ok in the end, because it helped improve some billionaire's beta tech-demo?
FSD isn’t a monolith, where speeding up one company gets us to the goal faster. We don’t even know if it’s possible with current tech, let alone with just cameras. Slowing down Tesla might just be making a dead end safer. We don’t know, which is why we need safety standards.
Exactly! I expect to see nothing but negativity here regarding this.
Making intelligence out of silicon isn't easy. Let the computers learn this way during the transition period, and finally we can remove human drivers from the road.
More than 38,000 people die every year in crashes on U.S. roadways. And Tesla makes the safest cars:
The Insurance Institute for Highway Safety (IIHS), the main independent organization that conducts crash tests on vehicles in the US, released the result of its latest tests on the Tesla Model Y and confirmed that it achieved the highest possible safety rating.
>The Insurance Institute for Highway Safety (IIHS), the main independent organization that conducts crash tests on vehicles in the US, released the result of its latest tests on the Tesla Model Y and confirmed that it achieved the highest possible safety rating.
That doesn't mean that Tesla makes the safest cars. There are roughly ~100 cars with that rating, and nothing suggests the Model Y is safer than any of the others. It's also important to note that the rating isn't based on real-world data such as how often drivers actually crash and hurt others (e.g., how often FSD fails), but rather on how well the occupant is protected in the event of a crash.
It is faster this way, and anyway it would be impossible to simulate all the real-world scenarios in a synthetic environment. One of the potential benefits of FSD is that it will save lives, hence the question: should we go slow about it, or take a reasonable risk and get it done? There is a risk in going slow too.
I find it hard to believe Tesla has remotely exhausted their "dry-run" training options considering how much like a drunken 8 year old their cars behave on FSD.
The FSD AI shouldn't be connected to the real-world controls until it very rarely substantially deviates from what the human drivers do in their cars while it controls a virtual car from the real-world sensor inputs. And in those cases where it deviates, the devs should manually dig in and run to ground whether it would have hit something/someone or broken the law. Not until that list of cases stops growing, particularly in dense urban areas full of edge cases, do you even start considering linking up the real car.
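As a toy illustration of that "stop when the list stops growing" gate (hypothetical case IDs and thresholds, not an actual Tesla process):

    def ready_for_real_controls(cases_per_release, max_new_cases=0, lookback=3):
        # cases_per_release: one set of unique divergence-case IDs per software release,
        # oldest first. Pass only if the last `lookback` releases each surfaced at most
        # `max_new_cases` cases never seen in any earlier release.
        seen, new_counts = set(), []
        for cases in cases_per_release:
            new_counts.append(len(cases - seen))
            seen |= cases
        recent = new_counts[-lookback:]
        return len(recent) == lookback and all(n <= max_new_cases for n in recent)

    # A history that keeps surfacing new edge cases fails the gate:
    print(ready_for_real_controls([{"A", "B"}, {"B", "C"}, {"C"}, {"C"}, {"C"}]))  # True
    print(ready_for_real_controls([{"A"}, {"B"}, {"C"}, {"D"}, {"E"}]))            # False

The numbers are obviously placeholders; the point is that "the list stopped growing" can be made a measurable release gate rather than a feeling.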
From what I'm seeing they're instead turning Tesla drivers into training supervisors with the real-world serving as the virtual one, while putting everyone else on/near roads at risk.
It's criminal, and I expect consequences. It's already illegal to let a child steer from your lap, and that's without even handing over the pedals. People operating "FSD Beta" on public roads should be at least as liable. Where are the authorities and enforcement?
> One of the potential benefits of FSD is that it will save lives
But that doesn’t mean that Tesla will save lives. Maybe they do a bunch of this and learn that cameras alone aren’t sufficient, and then Waymo wins. Tesla wouldn’t have saved any lives, only killed a couple of people unnecessarily.
Medicine is the classic example of applying this kind of thinking. You could do all sorts of unethical medical testing to speed up medical research, saving countless lives down the line, but we don’t because it doesn’t make it right.
With another medical example, you could roll out snake oil without testing it thoroughly because it’ll save lives if it works. But maybe snake oil doesn’t work, and it’ll be some other thing that works, and by rushing the snake oil, you just made things worse.
>Medicine is the classic example of applying this kind of thinking. You could do all sorts of unethical medical testing to speed up medical research, saving countless lives down the line, but we don’t because it doesn’t make it right.
Bringing up medicine for your stance might backfire. There are practical examples where policies to "go slow and reduce risk" have cost lives. For example, testing some medicines on pregnant women is bad for the fetus, so there are policies to "not test on anyone who might be pregnant", and as a result there are few studies on women between the ages of 20 - 80, and women's health has suffered as a result.
The point isn’t “go slow.” The points are “this area of ethics is well studied and much more complex than ‘wild west research saves more lives’” as well as “society has rejected this particular form of utilitarianism.”
Why do you discard the possibility that Tesla will save lives in the long term? You may say it is unlikely, but it is not like Musk has never delivered world-scale breakthroughs.
Also, regarding medicine, do you really believe we do not do "unethical" medical testing? I guess it depends on your ethical standards and how high they are :)
But let's get back to the cost-benefit trade-off. COVID vaccine trials were rushed. So it is obviously sometimes worth it.
> without trying to kill people
Do you suggest that Tesla is trying to kill people? That would be a ridiculous statement.
I bet the risk of getting injured by Tesla's beta version of FSD is minuscule compared to the risk of getting into an accident caused by a human driver. I am not for banning either of them. Even when we get to the point where FSD is much safer than drivers, I would be against banning humans.
>I bet the risk of getting injured by Tesla's beta version of FSD is minuscule compared to the risk of getting into an accident caused by a human driver.
You can bet all you want, but human drivers, as flawed as they may be, are all fully liable by law for any mistakes they make at the wheel and have to pay with money or jail time plus losing their license.
Who is liable for the mistakes FSD makes? Who goes to jail if it runs down a pedestrian by mistake? Elon? The driver? Can FSD lose its license like human drivers can for their mistakes?
You can't compare human drivers to FSD on safety when FSD has zero liability before the law and all the blame automatically goes to the driver.
> Who is liable for the mistakes FSD makes? Who goes to jail if it runs down a pedestrian by mistake? Elon? The driver?
Yup. The driver. Aside from image, there appears to be few if any incentives for FSD to improve beyond the “it does the right thing 80% of the time” mark.
> there appears to be few if any incentives for FSD to improve beyond...
I think it is simply not true. Creating an FSD system that could replace human drivers, e.g. for trucks, is potentially highly lucrative and would make the economy more efficient.
Perhaps, though, there should be some minimum bar before we allow testing on public roads. The Tesla FSD beta videos I've seen thus far are truly alarming. The system is nowhere near ready for testing in the real world, where it poses significant danger to many innocent bystanders.
> One of the potential benefits of FSD is that it will save lives
Continuing this line of logic, the most advanced FSD would save the most lives. Thus the argument could be made that Tesla should abandon their research and license Waymo FSD.
MindGeek could obviously have done more. But I don't think you are being transparent about who promoted and ran that petition.
It was heavily promoted by Exodus Cry, an organisation that seeks to completely ban porn, strip clubs and any type of sex work: https://en.wikipedia.org/wiki/Exodus_Cry
This whole thing has led to a lot of unintended consequences that have significantly hurt or negatively impacted sex workers, such as the MasterCard changes that made OnlyFans try to get rid of sexual content.
I think it depends heavily on what area you want support for. EC2 support is generally very good; billing support is reasonable if a bit slow. Support for a lot of their managed services or media services is beyond useless. All the support engineer does is take your complaint and say they will check with the internal service team. Your ticket will just end up in a "waiting for Amazon" state for weeks or months.
That's the claim by Xsolla. But Netflix specifically say they don't do that:
"We have no bell curves or rankings or quotas such as “cut the bottom 10% every year.” That would be detrimental to fostering collaboration, and is a simplistic, rules-based approach we would never support" from https://jobs.netflix.com/culture
1. No personal information at all. It only says valid or not valid.
2. Name and date of birth
3. Foreign travel, with name, date of birth as well as information about test type or vaccination type etc.
I think that highly depends on the service. The new App Runner service, for instance, is a wild ride of bugginess, lack of testing and incorrect documentation.