One of the rules of Mario Maker is that in order to post a level, the creator has to clear it, to make sure that all posted levels are possible. But the creator of this level recently admitted that their clear was tool-assisted. The team that was working on beating all the levels doesn't consider such "hacked" levels in scope.
StingRay is a brand name; the generic term is IMSI catcher. It's basically a portable fake cell tower that law enforcement can carry around, and it captures the identities of every phone in the vicinity. They can also pretend to be 2G-only towers to trick phones into using an insecure protocol so more data can be captured.
Since it tracks everyone in the area, there are major privacy concerns about how these devices are used, and police have been caught repeatedly lying about their use, to the point that if the defense in a case discovered police had used a StingRay to identify their client and challenged it, the police would drop the case rather than let anything about the StingRay enter the public record. https://arstechnica.com/tech-policy/2015/04/fbi-would-rather...
Even beyond ramping up a new project, it can be important for maintaining existing ones. If a service outage costs a company millions of dollars a minute, it can be worth keeping a team around that can resolve the occasional outage a few minutes faster, even if they have almost nothing to do the rest of the year.
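As a back-of-the-envelope illustration (every number below is a made-up assumption, not a figure from anywhere):

    # Hypothetical figures for illustration only
    outage_cost_per_minute = 2_000_000   # dollars lost per minute of downtime
    minutes_saved_per_outage = 5         # how much faster the standby team resolves it
    outages_per_year = 2
    team_cost_per_year = 10 * 300_000    # ten engineers, fully loaded

    savings = outage_cost_per_minute * minutes_saved_per_outage * outages_per_year
    print(f"saved ${savings:,} vs. team cost ${team_cost_per_year:,}")
    # saved $20,000,000 vs. team cost $3,000,000

Even with fairly conservative assumptions, shaving a few minutes off a couple of outages a year can pay for the whole team several times over.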
There are four ways for code to not be copyrighted (in the US):
1. The author died more than 70 years ago, or it was a corporate work and more than 95 years have passed since publication
2. It was published before March 1, 1989 without a copyright notice.
3. It was written by the US federal government
4. The author explicitly released it into the public domain
1 and 2 probably don't cover much code on the Internet. So unless it's a government repository and/or explicitly marked with a public domain notice, you can probably assume it's copyrighted.
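If you wanted to encode those four cases as a checklist, it might look something like this rough Python sketch (the function and parameter names are mine, and this is obviously not legal advice):

    from datetime import date

    def probably_public_domain(author_death_year=None,
                               corporate_publication_year=None,
                               pre_1989_without_notice=False,
                               us_federal_government_work=False,
                               explicit_pd_dedication=False):
        year = date.today().year
        if author_death_year and year - author_death_year > 70:
            return True   # case 1: author died 70+ years ago
        if corporate_publication_year and year - corporate_publication_year > 95:
            return True   # case 1: corporate work, 95+ years since publication
        if pre_1989_without_notice:
            return True   # case 2: published pre-1989 with no copyright notice
        if us_federal_government_work:
            return True   # case 3: US federal government work
        if explicit_pd_dedication:
            return True   # case 4: explicit public domain release
        return False      # default: assume it's copyrighted

The default-False return is the whole point: absent clear evidence of one of the four cases, treat the code as copyrighted.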
Right, but it depends on where the device was sold. A Galaxy S22 intended for Europe is built with a vulnerable Exynos SoC, but a Galaxy S22 built for the US has a Snapdragon.
If the answer to that is "results that the search engine thinks are most relevant to you", then that's probably a recommendation engine. If the answer is "results that are most recent" or even "results that many people have watched", then it probably isn't a recommendation engine.
You're acting like any kind of algorithm is automatically a recommendation engine that should terminate Section 230 protections, but I don't think it's that simple.
The most recent, the oldest, the closest match? That doesn't make it a recommendation system. Maybe try to read my post and make an effort to understand it rather than just responding with the first thing that comes to mind, because it's as if you haven't understood my post at all and made no effort to.
Do you not recognize how lousy a video sharing website this would be? Spammers are going to be constantly uploading marketing and other low-quality content with irrelevant keywords, while users who actually put work into making good quality videos will see their results pushed to the bottom quickly. How will you deal with that without implementing a system that can identify and recommend non-spam videos? Even the oldest versions of YouTube were boosting videos that got lots of likes.
>the closest match
How is deciding the "closest match" not considered a recommendation? They all have the user's keyword, what other criteria will you use?
>Do you not recognize how lousy a video sharing website this would be? Spammers are going to be constantly uploading marketing and other low-quality content with irrelevant keywords, while users who actually put work into making good quality videos will see their results pushed to the bottom quickly. How will you deal with that without implementing a system that can identify and recommend non-spam videos? Even the oldest versions of YouTube were boosting videos that got lots of likes.
Not sure why that's my problem, I'm not the one making money by promoting reactionary videos to reactionaries.
>How is deciding the "closest match" not considered a recommendation? They all have the user's keyword, what other criteria will you use?
Because it's not a recommendation; some are better matches than others, that's all. Some match the entire keyword, some just parts, some in different places... I don't understand what is difficult about this for you.
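For what it's worth, a deterministic, non-personalized ranking like that is easy to sketch. Here's a toy Python version (the scoring terms are my own illustrative choices, not anything any real site does):

    def match_key(title, query):
        t, q = title.lower(), query.lower()
        exact = (t == q)                            # whole title is the query
        contains = (q in t)                         # query appears somewhere in the title
        words_hit = sum(w in t for w in q.split())  # how many query words match
        position = t.find(q) if contains else len(t)
        # Better matches sort first; the same inputs always give the same order.
        return (not exact, not contains, -words_hit, position)

    videos = ["cat", "Funny cat", "Cat videos compilation", "Dogs and cats"]
    print(sorted(videos, key=lambda t: match_key(t, "cat")))
    # ['cat', 'Cat videos compilation', 'Funny cat', 'Dogs and cats']

Note there's no per-user signal anywhere in the sort key, which is the distinction I'm drawing: the order depends only on the query and the titles.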
And what do you do when there are 10,000 exact keyword matches? How do you sort them? If it's newest-first, the entire thing is just going to be spam accounts reposting the same video(s) on any major keyword.
"top", or anything notable is also likely to be gamed and abused too, especially if you fuzz "top" sorting because then its not really neutral, you're deciding the order and therefore making a recommendation.
Then there might be a circumstance where it is promoting something. Your point? That the law shouldn't make this illegal because then YouTube would have to have greater regard for what it surfaces? I'm not sure that's a bad thing; that's the entire point of the thread.
My point wasn't the frequency of it but rather that it might be the case that some of YouTube's operations do work that way... so what? Is YouTube's convenience the point of law? No. So why does it matter?
>Not sure why that's my problem, I'm not the one making money by promoting reactionary videos to reactionaries.
The reason I think we should see it as our problem is because I think the solution companies arrive at is just to turn the internet into cable TV, where only approved media organizations are able to share content because of liability concerns.
I'm not sure why YouTube should be able to operate the service it does with as little content filtering as it does. In what other industry would you be allowed to host child pornography because it's too difficult to make sure it doesn't get posted? No newspaper could take that excuse. Toys R Us couldn't say "oh jeez, we didn't realize that a corner of our store was being used by child pornographers to spread child pornography and also recruit children" and not be liable. I'm not sure why we think it's good to give an excuse to YouTube and Facebook for this and anything else anyone else would normally be liable for.
>No newspaper could take that excuse. Toys R Us couldn't say "oh jeez, we didn't realize that a corner of our store was being used by child pornographers to spread child pornography and also recruit children" and not be liable. I'm not sure why we think it's good to give an excuse to YouTube and Facebook for this and anything else anyone else would normally be liable for.
I'll admit, we may even be better off as a society if communication was less "democratized." There certainly would have been a lot less COVID and election misinformation out there if every rando wasn't able to have their uninformed ideas broadcast by giant platforms.
Exactly. I understand why Section 230 is in place and what it achieved, but I do wonder what good it has actually done and whether or not we actually need it. Perhaps we don't need to break up the big tech companies, and could instead just make them as liable as any other business would be. In that sense, I don't think they could afford the conglomeration they have right now.
You don't want a spam filter in your email? Everything should go straight to inbox? And that's not even mentioning the processing they have to do to figure out whose email it is in the first place.
I fucking hate MS branding, every fucking name confusingly split across multiple services (remember Skype vs. Skype for Business? Those were different tech stacks?). I thought he was talking about outlook.office365.com.
Also, outlook.com redirects to outlook.live.com/owa...