vault_'s comments

> - A big queue of PR's for reviewers to review

This is a feature. I would infinitely prefer 12 PRs that each take 5 minutes to review over 1 PR that takes an hour. Finding a few 5-15 minute chunks of time to make progress on the queue is much easier than finding an uninterrupted hour where it can be my primary focus.

> - The feature is split across multiple change sets, increasing cognitive load (coherence is lost)

It increases it a little bit, sure, but it also helps keep things focused. Reviewing, for example, a refactor plus a new feature enabled by that refactor in a single PR typically results in worse reviews of either part. And good tooling also helps. This style of code review needs PRs tied together in some way to keep track of the series. If I'm reading a PR and think "why are they doing it like this" I can always peek a couple PRs ahead and get an answer.

> - You end up doing work on branches of branches, and end up either having to become a rebase ninja or having tons of conflicts as each PR gets merged underneath you

This is a tooling problem. Git and Github are especially bad in this regard. Something like Graphite, Jujutsu, Sapling, git-branchless, or any VCS that supports stacks makes this essentially a non-issue.


code review isn't about diffs, it's about holistic changes to the project

the point is not queue progression, it is about dissemination of knowledge

one holistic change to a project = one PR

simple stuff really


Where this breaks down, as I've experienced at least, is that the product management side maintains basically zero awareness of the production constraints engineers are working within. If you've built out a painting production line around spray guns and beige, that has knock-on effects as to what results are attainable. A PM asking for polka-dots next sprint is throwing into question the entire body of practice, but this happens with extreme frequency in software.


I think this is a broader problem with a certain personality of "idea guy" who lies about requirements, often without being aware of it, and it becomes our problem as engineers to read their minds and stay two steps ahead of them.

Director: "We need a factory that's tooled around manufacturing boxes of cookies with blue frosting. We don't need any other colors, just blue frosting!"

Junior Engineer: "He's going to come back in two days and ask for red frosting. Better make sure it can do any color."

Mid Level Engineer: "He's going to come back in a week and ask us for multi-color boxes. Better make sure it can do any combination of colors in each box."

Senior Engineer: "He's going to have the big idea to add sprinkles, writing, and branch out into eclairs. Better make sure the factory is extensible and can retool itself per-batch to make each box to-order."

The worst part about this is that you can see where uncertainty leads to over-engineering, and there are consequences if your psychic senses are off. This is where a good PM steps in and forces folks like this onto a roadmap on a long-enough time-scale that you can see the mid-level's design is all we'll need. And if you still want eclairs, we can talk about that in 2026.


Seasoned Engineer: "He's going to come back in a week and ask for croissants. Better focus on just making blue cookies now and worry about the future later."


This. Engineers talk about feature creep, but forget to mention configuration creep. You can make anything you want! Yes, it’s taken two years and no, we didn’t nail the principal use case. But we can make a version 2!


Ideally engineers should have a sense of the business, and so know which bits of feature creep are expected and which are unlikely. So you design the blue cookies, but you leave extra space because next week you will be installing more paint guns so you can do more colors. You limit it to 6 colors, and when the CEO asks for more you say: we are out of space, do you really want to invest in more colors, eliminate an existing one, or pursue the new idea? Croissants may be a good investment, but are they likely enough and similar enough to do them on the same line? (If your cookies have nuts, the correct answer might be that we want a whole new factory even if there is commonality, just so we don't need the "also made in a facility that processes nuts" label.)


Don't leave extra space. Ask the question and get a "no, we don't need that", then call out the frequent specification misses when the schedule slips due to all the rework. (Or that doesn't happen, and you realize they might know what they are doing.)

Pain like this is critical feedback for an organization. Blunting it hurts more in the long term.


Probably a result of the patent dispute over the feature: https://apnews.com/article/apple-watch-patent-dispute-sales-...


I didn't realize they were actually selling a 10 Gbps service tier as part of this branding. It's never been available in my market, so I assumed that they were advertising the uplink capability of the thing my modem was connected to! Happy to see this go, but I'm still shocked to learn that the name was _less_ misleading than I had thought.


The article says it provides 10Gbps of service to 98% of customers upon request, which would be powered by fiber-to-the-home. I don't need 10Gbps, but I do want symmetric upload and download speeds. Does anyone know if it's possible to ask them to run fiber and have only an upload speed increase?


> The article says it provides 10Gbps of service to 98% of customers upon request

This part is funny to me because I've tried to sign up for their FttH and they declined despite it being in the area, and the same thing happened to others I know. I'm not sure how they came to that percentage but I don't believe it.


I suspect it's something like: of the people who go through qualification and get an offer from Comcast, 98% request to install it.

You probably didn't give them all the info unless you were ready to pay for it. And all the other people that get disqualified didn't count.


+1. I've read in various forums they will only install it if the construction cost is less than a few thousand dollars. This means they will say it's "available" on the order page, but then decline to install it.


I think you're asking for something like their 10G service, but at a lower cost and speed?

> The Comcast "Gigabit Pro" fiber connection that provides 10Gbps speeds costs $299.95 a month plus a $19.95 modem lease fee. It also requires a $500 installation charge and a $500 activation charge

I'm not sure that the pricing for that service actually pays for their installation and equipment costs, so I don't think you'd get much of a discount if you only ran it at 1Gbps symmetric. I did know someone who got the service and didn't bother to make the rest of his equipment work at 10G, so was only using a 1G port. And it works fine, but still costs $320/month + any other taxes and the $1000 install.


I had Gigabit Pro for a few years. They gave me a half-off promo that made it worth it at $150/mo, which is not much more than the close to $100 (after miscellaneous fees) that the regular gigabit-down, 35Meg-up HFC cable plan costs. Not to mention the fiber reliability is so much better - no brief outages, and I even had Comcast Business proactively reach out to replace gear when their monitoring noticed the fiber switch starting to fail.

I think they also discounted the install and activation to be $500 total.

I split it among around 8 housemates, which included the upstairs unit of our house, so it ended up being very affordable and there was always extra bandwidth to go around. The main benefit I enjoyed was having greater than 35Mbit upload speed.


> provides 10Gbps of service to 98% of customers upon request

this sounds like PR doublespeak weasel words for burst vs sustained.


Oh it's on-request. I followed their marketing link and it only offered me 1G so I assumed it was unavailable. Their big advantage is they have good coverage and many municipalities will preserve that by preventing other telecom companies from putting their alternative technologies in (say FttH).


You should push to have fiber. Once you get 1Gbps symmetrical (in my fortunate case after moving), there is no going back.

Not-fond memories of getting through to Crapcast support to resolve an outage (e.g. cable laid in the 90s failed) and then, as I waited, being pitched "a great deal just for you" of "upgrading" to get the CATV sh*t package.

Damn though, Crapcast did get to IPv6 fast and that specifically was solid in my previous house.


Last I knew, they still needed to finish working with their counterpart monopoly on their collaborative new Xumo device and get all the systems lined up to use it.

Then they need to kick the little old grandmas still watching traditional cable off their network and set them up on a new Xumo streaming box instead. Then they can drop the old video channels and use their frequencies to provide faster service on the same old copper wires.


Not sure this is right. DOCSIS4.0 (which I think is what you are referring to?) doesn't require TV channels to be moved off, plus it can coexist with existing DOCSIS3.0/3.1 (I think the plan is to actually bond 3.0, 3.1 and 4.0 channels together - much like how most 3.1 rollouts are actually majority 3.0 channels for backwards-compatibility purposes).

DOCSIS4.0 does use higher frequencies though and this requires a lot of additional work to upgrade the infra to support this.

I think what Comcast is calling '10G' is the fact you can now order a totally new FTTH run which doesn't use coax instead.

Tbh it's a confused strategy. If you're going to offer XGS-PON to everyone, why bother with DOCSIS4.0? It doesn't really make sense to run fibre just to one customer; you could probably do a whole street in not much more time.


I don't know how coax internet works, or how the channel allocations work, but it seems to me if they can offer 2Gbps/200Mbps already why can't we opt for a channel reallocation and get like 1Gbps symmetrical, or at least 1Gbps/500Mbps or something?

I do understand the legacy channel allocations were designed for almost entirely download - but 2Gbps? That can't be...


The way those cable modem systems work is essentially by laying a data channel (usually several) on top of the existing coaxial cable network, similar to DSL laying data on top of the existing telephone network. However, this means the cards which transmit and receive are still very much analog beasts, pumping out some incredible signal levels to reach as far as possible. Similar to DSL, the download-centric focus is built into the design. Also, your small modem can't scream nearly as loud as the downstream signal can, so some signal loss is more likely, limiting the upload channels. Finally, a cable modem network is usually quite shared, with something like 8 transmission lines feeding entire neighborhoods or cities. Depending on node congestion you may not even get your advertised speeds. At least with DSL your line is basically dedicated to you lol
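
To make the sharing concrete, here's a rough back-of-the-envelope sketch in Python (every number in it is an illustrative assumption, not any ISP's actual plant figure) of why a download-centric, shared coax node feels so asymmetric per subscriber:

```python
# Back-of-the-envelope sketch of a shared DOCSIS node. All numbers are
# illustrative assumptions, not any ISP's real plant figures.

downstream_channels = 32          # bonded downstream channels (assumed)
downstream_per_channel_mbps = 38  # rough throughput of one 6 MHz 256-QAM channel
upstream_channels = 4             # far fewer upstream channels (assumed)
upstream_per_channel_mbps = 27    # rough throughput of one upstream channel (assumed)

subscribers_on_node = 200         # homes sharing one node (assumed)
peak_active_fraction = 0.3        # fraction of them busy at peak (assumed)

down_capacity = downstream_channels * downstream_per_channel_mbps
up_capacity = upstream_channels * upstream_per_channel_mbps
active = subscribers_on_node * peak_active_fraction

print(f"Node capacity: {down_capacity} Mbps down / {up_capacity} Mbps up")
print(f"Per active subscriber at peak: "
      f"{down_capacity / active:.1f} Mbps down / {up_capacity / active:.1f} Mbps up")
```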


> I don't know how coax internet works, or how the channel allocations work, but it seems to me if they can offer 2Gbps/200Mbps already why can't we opt for a channel reallocation and get like 1Gbps symmetrical, or at least 1Gbps/500Mbps or something?

Because they cannot actually offer it, it is all marketing bullshit.

Always assume coaxial upload bandwidth is slim to none. They probably just advertise a burst speed you get for 5 seconds. If it is not symmetrical, it is not real in my mind.


How do burst speeds work? What is happening when you get, like, 10X faster upstream speed - for an instant - and then it drops back to its normal crawl?


I assume they take it from the neighbors, which is why you never see coaxial cable internet providers advertise upload bandwidth. They only ever state download, and even then those are also burst speeds, so you should assume that if you buy 100Mbps down from coaxial you only get 50Mbps or less sustained.

Because they are heavily oversubscribed and don’t want to invest in fiber infrastructure to increase capacity.
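
Burst-vs-sustained plans are commonly described as token-bucket style shaping: a full bucket lets you briefly exceed the sustained rate, then you fall back to the refill rate once it drains. A minimal sketch of that idea (the class, names, and numbers are mine, not any ISP's actual implementation):

```python
import time

class TokenBucket:
    """Toy token bucket: a full bucket allows a short burst above the sustained
    rate; once drained, throughput falls back to the refill (sustained) rate."""

    def __init__(self, sustained_mbps: float, burst_megabits: float):
        self.rate = sustained_mbps        # refill rate = advertised "sustained" speed
        self.capacity = burst_megabits    # bucket size = how long the burst can last
        self.tokens = burst_megabits
        self.last = time.monotonic()

    def allow(self, megabits: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= megabits:
            self.tokens -= megabits
            return True
        return False                      # over budget: traffic gets queued or shaped

# Illustrative plan: 50 Mbps sustained upload, with a bucket big enough to
# burst at ~500 Mbps for roughly ten seconds before it empties.
bucket = TokenBucket(sustained_mbps=50, burst_megabits=5000)
```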


Autoscaling seems like a downstream concern from the techniques being discussed here. Autoscaling tends to have a pretty high latency, so you still need a strategy for being overloaded while that extra capacity comes online. There's also a question of how the autoscaler knows what "load" is and when it's "too high." Just going off of CPU/memory usage probably means you're over-provisioning. Instead, if you have back-pressure or load-shedding built into your system you can use those as signals to the autoscaler.
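
As a sketch of that last point (everything here is hypothetical: the function, thresholds, and signal names are illustrative, not any particular autoscaler's API), a scaling rule driven by back-pressure signals rather than CPU might look like:

```python
def desired_replicas(current_replicas: int,
                     shed_rate: float,          # fraction of requests being load-shed
                     queue_depth: int,          # requests waiting in the ingress queue
                     max_queue_per_replica: int = 100) -> int:
    """Hypothetical scaling rule: use back-pressure/load-shedding as the signal.

    If we're shedding real traffic we are overloaded by definition; if queues
    are deep, we're about to be. CPU never has to enter into it."""
    if shed_rate > 0.01:
        # Already dropping traffic: add capacity proportional to how much we drop.
        return current_replicas + max(1, int(current_replicas * shed_rate))
    if queue_depth > current_replicas * max_queue_per_replica:
        return current_replicas + 1
    return current_replicas
```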


Autoscaling is great, if you solve the problems you rightly mention.

But IMO it's best viewed not as a technique to increase capacity that risks overprovisioning, but as a technique to significantly reduce the overprovisioning you were already likely doing to provide capacity that could handle peaks in demand without blowing through delivery expectations (e.g., timeliness, data loss minimisation, etc.).

At an old employer, our load was seasonal over the day. If one instance of an app could handle N req/s, and the daily peak maxed out at 100N req/s, then we had to run 100 instances as a minimum (we usually chucked some extra capacity in there for surprises) even if the mean daily peak was 75N req/s.

And of course, at the times of the day when incoming reqs/s was 0.5N reqs/s, well, we still had 99 instances twiddling their thumbs.

And then there were the days when suddenly we're hitting 200N req/s because Germany made the World Cup quarter-finals, and things are catching fire and services are degraded in a way that customers notice, and it becomes an official Bad Thing That Must Be Explained To The CEO.

So when we reached a point in our system architecture (which took a fair bit of refactoring) where we could use autoscaling, we saved soooo much money, and had far fewer Bad Thing Explanations to do.

We had always been massively overprovisioned for 20 hours of the day, and often still overprovisioned for the other 4, but we weren't overprovisioned enough for black swans, it was the worst of both worlds.

(Although we kept a very close eye on Germany's progress in the football after that first World Cup experience)
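
To put rough numbers on that waste (a toy calculation; the 100-instance peak and N req/s per instance are from the description above, while the daily demand shape is made up):

```python
# Toy illustration of static provisioning for a daily peak.
# One instance handles N req/s; demand below is expressed in multiples of N.

instances_static = 100                                            # provisioned 24/7 for the peak
hourly_demand = [0.5] * 8 + [25] * 8 + [100] * 4 + [50] * 4       # assumed 24-hour shape

needed = sum(max(1, round(d)) for d in hourly_demand)             # instance-hours actually used
provisioned = instances_static * len(hourly_demand)               # instance-hours paid for

print(f"Instance-hours needed:      {needed}")
print(f"Instance-hours provisioned: {provisioned}")
print(f"Utilisation: {needed / provisioned:.0%}")                 # the rest is thumb-twiddling
```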

You're spot on that

a) to autoscale up effectively we had to minimise the time an instance took to go from cold to hot, so we focused a lot on shared caches being available to quickly populate in-memory caches

b) adding new hardware instances was always going to take longer than adding new app instances, so we had to find some balance in how we overprovisioned hardware capacity to give us breathing room for scaling without wasting too much money and

c) we found significant efficiencies in costs and time to scale by changing the signals used to scale after starting out using CPU/mem.

Also a significant learning curve for our org was realising that we needed to ensure we didn't scale down too aggressively, especially the hardware stuff that scaled down far faster than it scaled up.

We hit situations where we'd scale down after a peak had ended, then shortly after along came another peak, so all the capacity we'd just dynamically removed had to be added back, with the inherent speed issues you mentioned, causing our service to be slow and annoying for customers, with minimal savings while capacity was trampolining.

(This incidentally can be really problematic in systems where horizontal scaling can introduce a stop the world pause across multiple instances of an app.

Anything that uses Kafka and consumer groups is particularly prone to this, as membership change in the group pauses all members of the CG while partitions are reallocated, although later versions of Kafka with sticky assignors have improved this somewhat. But yeah, very critical to stop these kinda apps from trampolining capacity if you want to keep data timeliness within acceptable bounds.)
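
A minimal sketch of the anti-trampolining idea (this mirrors the intent of things like the Kubernetes HPA scale-down stabilization window mentioned below, but the code itself is hypothetical, not the HPA implementation):

```python
import time
from collections import deque

class ScaleDownStabilizer:
    """Only scale down to the highest replica count recommended within the last
    `window_s` seconds, so a short lull between two peaks can't strip away
    capacity the next peak is about to need. Scale-up passes through
    immediately."""

    def __init__(self, window_s: float = 600.0):
        self.window_s = window_s
        self.history = deque()            # (timestamp, recommended replica count)

    def recommend(self, desired: int) -> int:
        now = time.monotonic()
        self.history.append((now, desired))
        while self.history and now - self.history[0][0] > self.window_s:
            self.history.popleft()
        return max(r for _, r in self.history)
```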

It took a lot of tuning to get all of it right, but when we did, the savings were spectacular.

I think the CTO worked out that it only took six months of the reduced AWS costs to equal the cost of the two years of system refactoring needed to get to that point, and after that, it was all ongoing cream for the shareholders.

And while I get the hate people have for unnecessary usage of K8s (like Kafka, it's a complex solution for complicated problems and using it unnecessarily is taking on a whole lot of complexity for no gain), it was perfect for our use case, the ability to tune how HPAs scale down, being able to scale on custom metrics, it was just brilliant.

(I wish I could end with "And the company reinvested a significant proportion of the savings into growth and gave us all big fat bonuses for saving so much money", but haha, no. The CFO did try to tell us we'd been unnecessarily wasteful prior and should have just built a system that was created in 2007 like the 2019 version from the start, because apparently a lot of MBA schools have an entrance requirement of psychopathy and then to graduate you have to swear a bloodpact with the cruel and vicious God of Shareholder Value)


> Real products built today have a finite amount of demand, and global cloud capacity is larger than that.

This isn't really true, and it's especially not true when specialized hardware comes into play. If you have a "web-scale" GPU workload, it's not unlikely that you'll hit resource availability constraints from time to time. The question isn't whether cloud capacity is larger than your demand for a particular resource, it's whether cloud capacity is larger than the aggregate peak demand for that resource. Cloud providers aren't magic. They engage in capacity planning, sometimes underestimate, and are sometimes unable to actually procure as much hardware as they want to.


Yeah, I don't think circuit breakers are really the appropriate choice in most of the situations the article is describing. Rate limiting and backpressure seem like better options most of the time.

The way I see it, circuit breakers are safety devices. They're for when you need to keep a system in a safe control region and are willing to sacrifice some reliability in order to achieve that. e.g. preventing customers from accidentally turning your globally distributed whatever into a DDOS platform, or limiting the blast radius when infrastructure automation decides it should delete everything.
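
For contrast with rate limiting and backpressure, here's a minimal sketch of the circuit-breaker pattern as I mean it (class name, thresholds, and behaviour are illustrative):

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: trip open after repeated failures, fail fast while
    open, then allow a single trial call after a cooldown (half-open)."""

    def __init__(self, failure_threshold: int = 5, reset_timeout_s: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout_s = reset_timeout_s
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout_s:
                # Sacrifice some reliability to keep the system in a safe region.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None          # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```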


Nested replies are definitely better as a way of consuming a post and its comments once and then never thinking about it again. For an asynchronous discussion between several people they get unwieldy after a few rounds of replying. They also make it harder to coherently reference points made cross-tree. That plus algorithmic ranking gives a constant feeling of "gotta refresh to see if there's new stuff" that serves a site like Reddit well, but it makes it much harder to have a longer discussion with more back and forth.

Having recently started participating in a community where most useful discussions are on a PhpBB forum, going back to linear posting was actually refreshing. It's easy to stay on top of because you can check in once a day or so and see just the conversations that have updates since you've last checked them. Threads being sorted by most recently updated means you focus on where there's active discussion. And once you've read those things there's no reason to stick around. "That's it! Get back to doing something useful."

Obviously, this doesn't really scale to a community the size of reddit, but I think it's really pleasant for medium-sized communities.


I think you can mostly solve these problems by changing how the trees are sorted rather than eliminating them entirely.

Here's the usability problem: You're 40 pages deep in a 100 page discussion and you see someone asked a question that you also want to know the answer to. How do you weed through the comments in the remaining 60 pages to find only answers to this question and relevant tangents? I've yet to see a conventional forum come up with a solution that doesn't involve a lot of friction or potential for missed information.

It is far too common for a forum's built in search tool to fail me for some reason or another, leading me to manually scan the entire thread. I may be willing to do that work if the information I am seeking is very important, but will I still feel that way if I just have a hunch that maybe I can provide my own input for someone else?


An upside of linear threads is synchrony, as you've pointed out. But the linear thread format isn't necessarily incompatible with nesting; with quoting you're effectively having soft-nested conversations. What's lacking is UX that emphasizes this mixed threading style and thus benefits from the best of both.

It might surprise you, but I think 4chan explores mixed threading like no other website does. Threads are linear, but quoting results in backlinks, meaning isolated conversations are easily navigable, filtering out the "firehose". There's also a button that looks like [-] that can hide a post and all its nested replies; this can hide a specific reply chain from the linear view of the thread. I gave the system a bigger eulogy over here: https://news.ycombinator.com/item?id=33567593
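
As a sketch of what makes that navigation cheap (the post format here is simplified and hypothetical), quoting by post number gives you a reply graph for free, which is all the backlinks are:

```python
import re
from collections import defaultdict

QUOTE = re.compile(r">>(\d+)")

def backlinks(thread: dict) -> dict:
    """Map each post ID to the later posts that quote it, so a linear thread
    can still be browsed as a set of soft-nested conversations."""
    links = defaultdict(list)
    for post_id, body in thread.items():
        for quoted in QUOTE.findall(body):
            links[int(quoted)].append(post_id)
    return dict(links)

thread = {
    101: "Original question about threading models.",
    102: ">>101 Linear threads keep the discussion synchronous.",
    103: ">>101 >>102 Quoting gives you soft nesting on top of that.",
}
print(backlinks(thread))   # {101: [102, 103], 102: [103]}
```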


> A doctor knows what his practice is worth and wants every cent he can get out of it - but the next generation of doctor is not going to be able to compete with debt financing what a PE cash-buyer can get.

In my opinion, the physician in this example is a monster. Profit maximization is a choice, not some kind of moral imperative. Am I supposed to have any respect for somebody selling out their employees and patients to vampires so they can retire to a beach or whatever?

The solution I'd want to see for situations like this is to find a way to sell to the people who have a continuing interest in how the business is run: employees and customers. The "exit" that does right by all interested parties would be something like having a newly formed employee coop gradually buy out the founder's ownership stake. To make a tech analogy, you don't have to sell your 0-day to a foreign government just because they pay more than the bug bounty program! You don't have to sell out your community to vampires because they're the highest bidders! This is a choice that somebody is making.


> In my opinion, the physician in this example is a monster. Profit maximization is a choice, not some kind of moral imperative.

They've had a career of treating patients. I think they've satisfied the moral imperative already. And selling to PE doesn't necessarily equate to "profit maximization". It could just mean "decent sale". As the OP said, often there simply isn't anyone available to buy it out at the timeline it needs to be bought out.

> The solution I'd want to see for situations like this is to find a way to sell to the people who have a continuing interest in how the business is run: employees and customers. The "exit" that does right by all interested parties would be something like having a newly formed employee coop gradually buy out the founder's ownership stake.

I don't doubt that this can work (it has for other businesses!). However, a given doctor wants to retire soon. Can you point him to a concrete plan to set this up? As in a firm that will have said plan ready, does all the legal work, and manages the terms with the existing employees/customers? The doctor already has his hands full treating patients and running the business.

If you cannot point him to such a resource, then do you see why he'd just sell to PE?


> They've had a career of treating patients. I think they've satisfied the moral imperative already.

No offense (really), but speaking from professional experience, I think this is naive and contributing to a lot of reasons why healthcare costs have started to spiral out of control. There are moral doctors, and immoral ones, and everything in between.

I'm not really sure why healthcare provision as an economic transaction is necessarily more moral than any other economic transaction that brings net benefit to the receiver of the service.

If there was such a cost to the physician overall, in terms of altruistic cost-benefit balances, they'd have trouble finding people wanting to go to medical school. I think the labor markets (in terms of medical school supply and demand) speak to the nature of that balance.

I don't want to demonize doctors (my family and myself fall into these categories) but I think it's dangerous to idolize them at the same time.


Also from experience, I think it's Baumol's cost disease really starting to show. Wages/practice costs for doctors have increased because everything around them is more expensive.

A family doctor in my hometown used to be able to afford a house in the nicest neighbourhood on their earnings. They didn't have to talk about money, and when they sold their practice they were happy to make sure it went into the right hands and sold it for a reasonable amount of money.

These days a family doctor can get a 2 bed condo or a townhouse near their practice, or they can have a long commute with something larger. After paying for office expenses, childcare (which also suffers from Baumol's cost disease) and their more expensive education, there's far less for retirement. You really have to maximize what you get out of your practice when you sell it.

I know there's a long history of it being "a calling" and expecting sacrifice. That's still expected, and yet the same rewards aren't there as in the past. Nobody in the past looked like a greedy asshole because they didn't have to ask for more money or really worry about anything. It was set up your practice and live your life on autopilot.


I want to demonize doctors.


> Am I supposed to have any respect for somebody selling out their employees and patients to vampires so they can retire to a beach

Nobody else will buy it. The alternatives are shutting down the practice or forcing oneself to keep working. Particularly in medicine, the latter is dangerous.

> solution I'd want to see for situations like this is to find a way to sell to the people who have a continuing interest in how the business is run

Sounds like a buy-out strategy! Seriously. Berkshire Hathaway could negotiate good deal terms by making this promise.


> Nobody else will buy it. The alternatives are shutting down the practice or forcing oneself to keep working. Particularly in medicine, the latter is dangerous.

Can't you just hire a "CEO" who will take over all management responsibilities?


> Can't you just hire a "CEO" who will take over all management responsibilities

You, the retiring doctor, hire a manager whose job it is to hire another doctor? Do you not see why this works at scale but for one practice?


Well, keep in mind that from the perspective of the rest of the healthcare industry, private practices are kind of dinosaurs at this point. They don't play well in our modern system, and the last 50 years of healthcare legislation has done everything short of outright banning them.

New doctors are not trained or expected to run businesses. Getting money from medicare or insurance is a nightmare. Patients want access to more services than ever before. The entire idea of a private practice is anathema to modern ideas of healthcare oversight and access equity. Even getting private malpractice insurance is almost impossible now.

There is a reason almost nobody is starting private practices anymore. So to give the doctors a bit of credit this is a chance to slip quietly into the night.


This explains quite a lot about why health care has been noticeably getting worse over the last several years -- for everybody.

In my part of the US, you can't even find a doctor that is taking new patients at all, private practice or otherwise. Talking with some doctors, it's clear that being a doctor these days is a very undesirable job. I know I wouldn't want to do it.


The system and incentives the US voters have set up says "we want you to sell to the vampires, and will give you a huge financial bonus if you do".

Maybe if voters didn't want such system, they might vote different people into power, or vote to change the system.

POSIWID. https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...


It's not "general purpose computing" maintenance, but the service you're talking about does exist, though AFAIK mostly at the very high-end. It's typically for things like home theaters, whole-home audio systems, or smart-home type setups (predating and now merging with current consumer IoT/home automation platforms). Not sure how much that's "maintenance" in the typical sense so much as support for their custom install work, but I bet if you had a Sonos or Lutron system where your installer went out of business you'd be able to find a different guy to deal with it.

