Hacker News | nawgz's comments

You're right, financial freedom is completely unfulfilling, instead it's really meaningful and impactful to be involved in a tech economy whose primary value has been in undermining democracy and social systems!

The wild success of traffic lights disagrees with your statement.

Traffic lights are only "wildly successful" for those who aren't color blind. Do some reading.

Wikipedia: https://en.wikipedia.org/wiki/Color_blindness

> The colors of traffic lights can be difficult for red–green color-blind people. This difficulty includes distinguishing red/amber lights from sodium street lamps, distinguishing green lights (closer to cyan) from white lights, and distinguishing red from amber lights, especially when there are no positional clues (see image).

Publication from 1983: https://pmc.ncbi.nlm.nih.gov/articles/PMC1875309/

> All but one admitted to difficulties with traffic signals, one admitted to a previously undeclared accident due to his colour blindness, and all but one offered suggestions for improving signal recognition. Nearly all reported confusion with street and signal lights, and confusion between the red and amber signals was common.


What a horrendous counter-argument. "People with notable perception issues don't perceive the same" is insanely obvious.

People not perceiving in the same way (the original point) is exactly the same as "notable perception issues".

That's misunderstanding what the original argument is about.

You really think that people have been debating for thousands of years if colour blind people exist, with no conclusion in sight?


The wild success of traffic lights comes from having 3 colors at fixed positions. Put those 3 colors in a single color-changing light and I would assume the accident rate would measurably increase.

The fact that a single-emitter traffic light that simply varies its color doesn't exist also disagrees with your statement.

Which of these have been met with scorn by liberals? You seem to not get the idea...


[flagged]


After significantly more searching, you managed to cite fewer criticisms of Trump’s “good actions” by liberals than you managed to cite “good actions” themselves, and then to top it all off you tried to weakly justify that conclusion with some trite aphorism about individualism encompassing many outcomes.

Weak!


Yes, you're right, I should google to make your arguments for you!

Listing a bunch of White House links and then 2 criticisms (edit: he got it up to about 6 criticisms of marijuana legislation, wow!) which aren't even really about the action, but more about the general malfeasance of the administration, is an extremely weak supporting argument for "liberals criticize anything good Trump does the same way conservatives criticized anything good Biden did". We can identify plentiful examples of naked hypocrisy around the criticisms of Biden - see the autopen debacle for one hilariously manufactured self-owning example.

It must really be quite trying to justify Trump's actions, I'm amazed you have failed to use any of that energy on introspection.


The only example that has any traction in my view is web shops, which claim that time-to-render and time-to-interactivity are critical for customer retention.

Surely there are not so many people building e-commerce sites that server components should have ever become so popular.


The thing is, time to render and interactivity are much more reliant on the database queries and the user's internet connection than anything else. Instead of a spinner or a progress bar in the browser's toolbar, now I get skeleton loaders and half a GB of memory used for one tab.


Not to defend the practice - I've never partaken - but I think there are some legit timing arguments: a server renderer can integrate more requests faster thanks to being co-located with services and DBs.
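Back-of-envelope arithmetic for that timing argument (the numbers are illustrative assumptions, not measurements from any real system):

```rust
fn total_latency_ms(round_trips: u32, rtt_ms: u32) -> u32 {
    round_trips * rtt_ms
}

fn main() {
    let client_rtt = 80; // browser <-> origin, ms (assumed)
    let local_rtt = 2;   // server <-> co-located db/services, ms (assumed)
    let requests = 5;    // dependent (waterfall) data requests

    // Client-side rendering: every dependent request crosses the wide-area link.
    let csr = total_latency_ms(requests, client_rtt);
    // Server rendering: the waterfall runs next to the db, plus one WAN round trip.
    let ssr = client_rtt + total_latency_ms(requests, local_rtt);
    println!("CSR {csr} ms vs SSR {ssr} ms"); // CSR 400 ms vs SSR 90 ms
}
```

The advantage only shows up when requests are dependent on each other; independent requests can be fired in parallel from the client.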


Which brings me back to my main point about web 1.0 architecture: serving pages from the server side, where the data lives. We've come full circle.


I seem to remember that the DOM nodes themselves expose some pretty useful functions. I think it was in the context of detecting edge crossings for a graph router, but you were able to interact with the computed/rendered coordinates in this context.

Sorry that's not more useful and explicit, it was a while back and never went anywhere.


A mistake lies in thinking it's a market at all; it's egregious that you'd call it free.


The free market is an analyzable simplification of the real market, but I think the assumptions hold in this case.

If a CEO delivers a certain advantage (a profit multiplier), it's rational that a bidding war will ensue for that CEO until they are paid the entire apparent advantage their presence offers the company. A similar effect happens for salespeople.

The key differences between free and real markets in this case are information and the distortions of lobbying, plus legal restrictions on the company. The CEO is incentivized to find ways around these issues to maximize their own pay.


Python users don't even believe in enabling cursory type checking, and its language design is surpassed even by JavaScript's; should it really even be mentioned in a language comparison? It is a tool for ML, and nothing else about the language is good or worthwhile.


> Companies cannot set prices arbitrarily

[Source required]

Edit: how are you downvoting me? Go look at corporate profit margins now, 10 years ago, and 40 years ago.

If you believe you can hand-wave with simplified BS like "supply and demand", you probably have some heavy reading on price elasticity to catch up on.


[flagged]


Then why are profit margins bigger? Supply and demand as the reason for profit percentage increasing margin makes no sense. I’d be interested in how you’d debate that.


> Supply and demand as the reason for profit percentage increasing margin makes no sense. I’d be interested in how you’d debate that.

That was never my argument. The commenter I responded to edited his comment to add those points after I replied. This was his comment before:

> Companies cannot set prices arbitrarily

[Source required]


That is pretty obvious from where it says “Edit:”; what isn't obvious is how supply and demand prevents companies from setting prices arbitrarily, which is and always was what your comment said.


> this is a bipartisan issue

Where the instance upthread and your instance both occurred under the same president? lol


> a change to one of our database systems' permissions which caused the database to output multiple entries into a “feature file” used by our Bot Management system ... to keep [that] system up to date with ever changing threats

> The software had a limit on the size of the feature file that was below its doubled size. That caused the software to fail

A configuration error can cause internet-scale outages. What an era we live in

Edit: also, after finishing my reading, I have to express some surprise that this type of error wasn't caught in a staging environment. If the entire error is that "during migration of ClickHouse nodes, the migration -> query -> configuration file pipeline caused configuration files to become illegally large", it seems intuitive to me that doing this same migration in staging would have identified this exact error, no?

I'm not big on distributed systems by any means, so maybe I'm overly naive, but frankly posting a faulty Rust code snippet that was unwrapping an error value without checking for the error didn't inspire confidence for me!
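For readers less familiar with Rust, here is a minimal sketch (all names invented, not Cloudflare's actual code) of the difference between unwrapping an over-limit result and handling it:

```rust
const MAX_FEATURES: usize = 200; // hypothetical hard limit on feature file entries

#[derive(Debug)]
enum FeatureError {
    TooMany(usize),
}

// Reject a feature file that exceeds the limit instead of silently truncating.
fn parse_features(lines: &[&str]) -> Result<Vec<String>, FeatureError> {
    if lines.len() > MAX_FEATURES {
        return Err(FeatureError::TooMany(lines.len()));
    }
    Ok(lines.iter().map(|s| s.to_string()).collect())
}

fn main() {
    let oversized: Vec<&str> = (0..400).map(|_| "feat").collect();

    // The pattern criticized in the post mortem amounts to this, which
    // panics the whole process on an oversized file:
    // let features = parse_features(&oversized).unwrap();

    // Matching on the Err lets the caller keep serving with the last good config:
    match parse_features(&oversized) {
        Ok(f) => println!("loaded {} features", f.len()),
        Err(FeatureError::TooMany(n)) => {
            println!("rejected oversized feature file ({n} entries), keeping previous config")
        }
    }
}
```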


It would have been caught in staging only if there was a similar amount of data in the database; with half the data, it would never have occurred there. It's not clear how easy it would be to keep the staging database matching production in terms of quantity and similarity of data.

I think it's quite rare for any company to have exact similar scale and size of storage in stage as in prod.


> I think it's quite rare for any company to have exact similar scale and size of storage in stage as in prod.

We’re like a millionth the size of Cloudflare and we have automated tests for (sort of) all queries to see what would happen with 20x more data.

Mostly to catch performance regressions, but it would work to catch these issues too.

I guess that doesn’t say anything about how rare it is, because this is also the first company at which I get the time to go to such lengths.
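A hedged sketch of that kind of check (invented names and limits, not any particular company's setup): inflate a prod-like dataset 20x and verify that size-limit invariants still hold before deploying.

```rust
const MAX_FEATURES: usize = 200; // hypothetical limit from the consuming system

// Stand-in for the real query: one feature-file entry per distinct feature name.
fn build_feature_file(rows: &[(String, String)]) -> Vec<String> {
    let mut names: Vec<String> = rows.iter().map(|(name, _)| name.clone()).collect();
    names.sort();
    names.dedup();
    names
}

fn main() {
    // 60 features, roughly today's production size (illustrative).
    let base: Vec<(String, String)> = (0..60)
        .map(|i| (format!("feature_{i}"), "default_db".to_string()))
        .collect();

    // Inflate 20x with distinct names to simulate growth.
    let scaled: Vec<(String, String)> = (0..20)
        .flat_map(|shard| {
            base.iter()
                .map(move |(n, db)| (format!("{n}_{shard}"), db.clone()))
        })
        .collect();

    let file = build_feature_file(&scaled);
    if file.len() > MAX_FEATURES {
        println!(
            "would break in prod: {} entries > limit of {}",
            file.len(),
            MAX_FEATURES
        );
    }
}
```

The point is that the invariant check is cheap even when a full-scale staging copy is not.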


But now consider how much extra data Cloudflare, at its size, would need just for staging - doubling or more their costs to keep staging exactly like production. They would also have to constantly simulate a similar volume of requests on top of it, since presumably they do hundreds or thousands of deployments per day.

In this case the database table in question seems modest in size (the features for ML), so naively one would think they could at least have kept staging features in sync with prod - but maybe they didn't consider that 55 rows vs 60 rows, or similar, could be a breaking point given a specific bug.

It is much easier to test with 20x data if you don't have the amount of data Cloudflare probably handles.


That just means it takes longer to test. It may not be possible to do it in a reasonable timeframe with the volumes involved, but if you already have 100k servers running to serve 25M requests per second, maybe briefly booting up another 100k isn’t going to be the end of the world?

Either way, you don’t need to do it on every commit, just often enough that you catch these kinds of issues before they go to prod.


> maybe briefly booting up another 100k isn’t going to be the end of the world

Cloudflare doesn’t run in AWS. They are a cloud provider themselves and mostly run on bare metal. Where would these extra 100k physical servers come from?


From their desire to representatively test before they deploy to production?

Doing stuff at scale doesn’t suddenly mean you skip testing.

And just because they host stuff themselves doesn’t mean they couldn’t run on the cloud if they needed to.


Cloudflare's infra costs are probably $300M+. Their GAAP profit is negative, and their non-GAAP income is less than their infra expenses. Can you imagine how much more they would have to charge or spend if they had to duplicate or simulate their production environment in staging for each of the hundreds of deployments they probably do a day?

Their main cost of revenue is these infra costs.


But they are probably doing hundreds of deployments a day, so wouldn't that make their pipelines extremely long? Not to mention the costs.


The speed and transparency of Cloudflare publishing this post mortem is excellent.

I also found the "remediation and follow up" section a bit lacking, not mentioning how, in general, regressions in query results caused by DB changes could be caught in future before they get widely rolled out.

Even if a staging env didn't have a production-like volume of data to trigger the same failure mode of a bot management system crash, there's also an opportunity to detect that something has gone awry if there were tests that the queries were returning functionally equivalent results after the proposed permission change. A dummy dataset containing a single http_requests_features column would suffice to trigger the dupe results behaviour.

In theory there are a few general ways this kind of issue could be detected, e.g. someone or something doing a before/after comparison to test that the DB permission change did not regress query results for common DB queries, for changes that are expected to not cause functional changes in behaviour.

Maybe it could have been detected with an automated test suite of the form "spin up a new DB, populate it with some curated toy dataset, then run a suite of important queries we must support and check the results are still equivalent (after normalising row order etc) to known good golden outputs". This style of regression testing is brittle, burdensome to maintain and error prone when you need to make functional changes and update what the "golden" outputs are - but it can give a pretty high probability of detecting that a DB change has caused unplanned functional regressions in query output, and you can find out about this in a dev environment or CI before a proposed DB change goes anywhere near production.
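A minimal sketch of the golden-output idea (hypothetical names, with an in-memory stand-in for a real DB): normalize row order, then diff against a known-good result so that duplicate rows after a permission change show up immediately.

```rust
// Sort rows so equivalent result sets compare equal regardless of row order.
fn normalize(mut rows: Vec<Vec<String>>) -> Vec<Vec<String>> {
    rows.sort();
    rows
}

fn main() {
    // Golden output captured from the known-good query (toy dataset with a
    // single http_requests_features column, as suggested above).
    let golden = normalize(vec![vec![
        "http_requests_features".to_string(),
        "col_a".to_string(),
    ]]);

    // After a permission change, the same query might return duplicate rows
    // (one per database the account can now see).
    let after = normalize(vec![
        vec!["http_requests_features".to_string(), "col_a".to_string()],
        vec!["http_requests_features".to_string(), "col_a".to_string()],
    ]);

    if after != golden {
        println!(
            "regression: {} rows vs {} golden rows",
            after.len(),
            golden.len()
        );
    }
}
```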


This wild `unwrap()` kinda took me aback as well. Someone really believed in themselves writing this. :)


They only recently rewrote their core in Rust (https://blog.cloudflare.com/20-percent-internet-upgrade/) -- given the newness of the system and things like "Over 100 engineers have worked on FL2, and we have over 130 modules", I won't be surprised by further similar incidents.


The irony of a rust rewrite taking down the internet is not lost on me.


20% seems to grow every time someone writes about this.


I have to wonder if AI was involved with the change.


I don't think this is the case with CloudFlare, but for every recent GitHub outage or performance issue... oh boy, I blame the clankers!

