Hacker News | nevon's comments

I'm sure part of it is so that marketing can say that their TV has new putz-tech smooth vibes AI 2.0, but honestly I also see this same thing happen with products aimed at technical people who would benefit from actually knowing what a particular feature or setting really is. Even in my own work on tools aimed at developers, non-technical stakeholders push really hard to dumb down and hide what things really are, believing that this makes the tools easier to use, when really it just makes them more confusing for the users.

I don't think you are the target audience of the dumbed-down part; the people paying them for it are. They don't need detailed documentation on those things, so why make it?

I have almost the same experience. I'm not running my own ISP and I'm not in a country known for originating DDoS attacks (Sweden), yet just using Firefox on Linux seems to be enough to be forced to click on traffic lights many times an hour. If I'm using Mullvad VPN, that accelerates to almost every minute. Cloudflare claims to support Privacy Pass, but their extension implementing it seems to do absolutely nothing.

> I'm not running my own ISP and I'm not in a country known for originating DDoS attacks (Sweden), yet just using Firefox on Linux seems to be enough to be forced to click on traffic lights many times an hour.

I'm in the same situation. Linux, Firefox, Sweden, with a residential IP that has been mine for weeks/months. Who's massively DDoS'ing with residential Telia IPs?!


You know, after reading your comment I decided to install Chromium and try it for a few minutes, and you're absolutely right. It did not ask for a captcha once. I opened the same websites where Cloudflare always asks me for a captcha on Firefox, so I thought this was common. After finding this out, I'm feeling annoyed.

Meanwhile, Chrome users should feel a shiver going down their spine.

Why would they? They're obviously on the 'right side of history'. \hj

Chrome browsers don't send a specific handshake, but while browsing other sites they help gather enough evidence that the browser is a human-operated piece of software.

Same situation except for the Linux part: Sweden, macOS, and Firefox.

The difference is that I haven't been able to get past Cloudflare's captcha for the past 2-3 years (on Firefox), and have to use Chrome for the few sites I need or want to see behind this stupid wall.

By now I've sent hundreds of feedback reports through their captcha feedback link. I keep doing it in the hope that at some point someone will see them...


Agreed. Same with Firefox on FreeBSD. Constant captchas. It identifies as Linux, by the way (it seems to be compiled that way by the maintainers), which is probably for the best: a 2% desktop market share OS fares better here than a 0.01% one.

That link doesn't answer the question though. It states that the extension is reviewed before receiving the recommended status. It does not state that updates are reviewed.


They do, and it takes longer for updates to Recommended extensions to be reviewed as a result.

This is what the Firefox add-ons team sent to me when one of my extensions was invited to the Recommended program:

> If you’re interested in Control Panel for Twitter becoming a Firefox Recommended Extension there are a couple of conditions to consider:

> 1) Mozilla staff security experts manually review every new submission of all Recommended extensions; this ensures all Recommended extensions remain compliant with AMO’s privacy and security standards. Due to this rigorous monitoring you can expect slightly longer review wait times for new version submissions (up to two weeks in some cases, though it’s usually just a few days).

> 2) Developers agree to actively maintain their Recommended extension (i.e. make timely bug fixes and/or generally tend to its ongoing maintenance). Basically we don't want to include abandoned or otherwise decaying content, so if the day arrives you intend to no longer maintain Control Panel for Twitter, we simply ask you to communicate that to us so we can plan for its removal from the program.


That's great! They should put that on the website.


Not stated in the most diplomatic way, but I do agree. Having used CDK (not cdktf) and now being forced back to Terraform feels like going back to the stone age. It is absolutely obvious to me that generating infrastructure definitions from a regular, testable language, using the same tools, techniques, and distribution mechanisms as the rest of your software development, is the superior way. Being able to piggyback off of the vast ecosystem of Terraform providers was a really clever move, although I understand it led to some rough edges.
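
To make the contrast concrete, here is a minimal AWS CDK sketch in TypeScript (the stack, the buckets, and the test are purely illustrative, not from any real project):

    import { App, Stack } from 'aws-cdk-lib';
    import { Template } from 'aws-cdk-lib/assertions';
    import * as s3 from 'aws-cdk-lib/aws-s3';

    // Infrastructure defined with ordinary language constructs:
    // classes, loops, string interpolation, type checking.
    class StorageStack extends Stack {
      constructor(app: App, id: string) {
        super(app, id);
        for (const name of ['raw', 'processed']) {
          new s3.Bucket(this, `${name}-bucket`, { versioned: true });
        }
      }
    }

    // ...and unit-tested with the same tooling as the rest of your code.
    const template = Template.fromStack(new StorageStack(new App(), 'Test'));
    template.resourceCountIs('AWS::S3::Bucket', 2);

The buckets are beside the point; the point is that the definition and its test live in one language, one package ecosystem, and one CI pipeline.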


Completely disagree. To me, and to the OSI, none of those things other than redistribution and forking has anything to do with whether something is open source. In fact, you could have a closed source project tick nearly all of those boxes, although that would indeed be very unusual.

I'm not sure if there is a term for what you are describing. Perhaps "community driven project".


That's the fun part: neither you nor the OSI gets to make that determination!


I suppose that's true, but it makes it quite hard to communicate specific concepts if everyone gets to come up with their own definition of existing terms. I'm aware that language evolves, but at least at the moment, expecting projects to be community driven just because they describe themselves as open source will set you up for conflict if they mean the conventional definition of the term and don't also happen to want to run a community-driven project.


Do we work at the same company? That said, I really don't understand why everyone hates on Bitbucket. I thought it was _fine_ from a user perspective. Now we're on GHE and I find it a sidegrade at best.

Now for the people who were operating Bitbucket, I'm sure it's a relief.


As a user, I found Bitbucket a lot worse for searching and browsing code. Its Markdown formatting is also more limited for documentation, and the lack of Mermaid support in Markdown documents was shocking to see, considering that both of its primary competitors (GitHub and GitLab) have implemented it.


Don't send the client information about players they should not be able to see based on their current position.


How does it know what isn't visible? Can it handle glass? Frosted glass? Smoke? What if I can't see the player but I can see their shadow? What if I can't see them because they're behind me but I can hear their footsteps? What if I have 50ms ping and the player is invisible after turning a corner because the server hasn't realized I can see them yet?

To answer all those questions you either have to render the entire game on the server for every player (not possible) or make the checks conservative enough that cheaters still get a significant advantage.
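
For what it's worth, a "conservative check" usually means something like "could this player possibly become relevant within one round trip", not an exact visibility test. A rough sketch of the idea (TypeScript; every name is hypothetical, and hasLineOfSight hides the genuinely hard occlusion problem):

    interface Vec3 { x: number; y: number; z: number; }

    interface Player {
      position: Vec3;
      pingMs: number;
    }

    // Assumed maximum movement speed, in units per second.
    const MAX_SPEED = 8;

    // Replicate the target if the viewer might see (or hear) them soon:
    // pad the line-of-sight test by how far both players could move during
    // one round trip, so nobody pops into view late on a laggy client.
    // That padding is exactly the head start a wallhack gets to keep.
    function shouldReplicate(
      viewer: Player,
      target: Player,
      hasLineOfSight: (a: Vec3, b: Vec3, paddingMeters: number) => boolean,
    ): boolean {
      const padding = 2 * MAX_SPEED * (viewer.pingMs / 1000);
      return hasLineOfSight(viewer.position, target.position, padding);
    }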


GeForce NOW begs to differ.

I know it's not the same, but IMHO it's the future of anti-cheat. We just need faster fiber networks.


Yeah. Stadia worked well in ideal conditions, so for people lucky enough to live that life, the technology's there.


I never understood why Google gave up so early on cloud gaming. Clearly it is the future; the infrastructure will need to develop, but your user base can grow by the day.

I live somewhat remotely, on an island group, and even though I have 500 Mbit fiber, my latency to the nearest GeForce NOW datacenter is 60-70 ms (which is my latency to most continental datacenters, so not NVIDIA's fault). That makes it unplayable for e.g. Battlefield 6 (I tried, believe me), but I have been playing Fortnite (which is less aim-sensitive) for 100+ hours with that latency.


And under such a system, how do you stop people from abusing latency compensation to make their character appear out of thin air from the opponent's perspective, by fake-juking a corner to trick the netcode into not sending the initial trajectory of their peeks?


Fortnite had this same issue when BR was first released. It was promptly fixed, by adding more stringent checks, after cheaters started abusing it.
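
I don't know what the checks actually were, but one plausible shape for them is capping how much latency the server will compensate for, so that faking extra lag stops widening the window. A purely illustrative sketch (the cap value and names are made up):

    // Rewind visibility/hit checks to cover real latency, but only up to a
    // cap, so a client can't fabricate lag to delay when opponents learn
    // about its movement. MAX_REWIND_MS is an assumed tuning value.
    const MAX_REWIND_MS = 200;

    function rewindWindowMs(reportedPingMs: number): number {
      return Math.min(Math.max(reportedPingMs, 0), MAX_REWIND_MS);
    }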


Fortnite has a fairly invasive rootkit-level anti-cheat too, don't forget.


The invasive kernel rootkit came months after they fixed the netcode abuses.


Then how would the client know where to render the positional audio of their footsteps for instance?


I'm coming from a place of complete ignorance here, so take my question as genuine and not as implying that this _should_ be an easy problem. But what exactly makes it so difficult to build a KVM that lets me connect two computers to two high-definition (2K in my case) monitors, along with some basic USB peripherals and audio components, and switch between them? Every single device I've found has had drawbacks like not supporting high refresh rates (144 Hz), not supporting Mac/Linux/Windows, only supporting audio output and not a microphone, not supporting Thunderbolt, or only supporting low resolutions.

Is it just that there's no market for it and that the cost would be too high? If money were not an issue, would there still be technical reasons that this is impossible?


Because those signals are really high-speed, and the protocols are really complicated.

Doing the equivalent of "yank cable from PC 1, plug cable in PC 2" is just about doable at a reasonable price point. Anything more complicated either requires a bunch of expensive hard-to-source dedicated chips, or a bunch of hard-to-implement software solutions. Especially stuff like reliable keyboard-controlled switching or USB-C laptop connectivity is a nightmare.

In practice this means you either have to give up on features, or let the price balloon to unacceptable levels.


There already exist many implementations of this idea. CDK, Pulumi, and Winglang are the ones that come to mind as probably the best known.


I have a simple solution for you: don't join a union if you don't want to be part of one.


That option isn't always available, at least in the US. Unless you live in a right-to-work state, you may be required to join the union (or at least pay its dues) as a condition of employment.

Somehow this is seen as "more progressive."


There is an underlying tension: freeriding.

If a union negotiates better conditions at a workplace, who should be subject to them? Everybody, of course, IMO. But what of people who never paid union dues?

There is no nice, tidy solution to that tension, only messy ones that impinge on a freedom somewhere.

It is worth unionising, voluntarily.

